Today we're discussing how you can create a fully automated, virtualized development environment, one that's customizable and ready to use in minutes.
In this article, I'll show you how to set up such an environment using PHP and Laravel, though the principles can be applied to your preferred tech stack.
We'll dive into creating a robust setup powered by VirtualBox and Vagrant, featuring:
- Apache Gateway
- PHP with Apache for front-end
- Laravel API
- Redis Cache with Redis Commander
- MySQL database with PHPMyAdmin
By the end of this guide, you'll have a secure, containerized environment accessible via specific routes, with core services in a private Docker network.
Furthermore, it’s very easy to connect Visual Studio Code to VirtualBox, and start your programming workflow.
Let's get started!
How does it all work?
TL;DR: Skip to section 4, A - Installation, to get right into setting this up on your system.
There are 4 parts to this solution I’ll discuss today. I’ll finish with the Development workflow.
- 1. Apache as a Gateway (Reverse Proxy)
- 2. Docker with Docker compose
- 3. Vagrant with VirtualBox
- 4. Development workflow
Would you like more in-depth articles on Apache, Docker, or Vagrant?
Let me know in the comments below, and connect with me on Twitter.
1. Apache as a Gateway (Reverse Proxy)
Apache 2 is an HTTP server with a modular setup. An HTTP server, in essence, serves files according to the HTTP protocol, of which two major versions are currently used across the web: HTTP/1.1 and HTTP/2.
Apache is typically used as an HTTP server to serve files; in this case, however, we're using it in proxy mode, specifically as a reverse proxy (also known as a gateway).
To the client, the reverse proxy looks like a regular web server: it decides where to forward each request and then returns the content as if it were the origin server.
Typical use cases:
- Load balancing, which is a topic for another day.
- Providing access to servers behind a firewall, which is what we're doing today.
The two core directives look like this:
ProxyPass "/foo" "http://foo.example.com/bar"
ProxyPassReverse "/foo" "http://foo.example.com/bar"
ProxyPass maps a local URL path to a remote server. Apache treats this mapping as a "worker": an object that holds its own connections and the configuration associated with them.
ProxyPassReverse ensures that the headers returned by the backend are rewritten to point to the gateway path instead of the origin server.
These are the two basic directives for creating mappings that appear to originate from the reverse proxy.
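For these directives to work, Apache's proxy modules have to be loaded. The httpd.conf shipped with the repo presumably enables them already, but for reference these are the relevant lines in a stock httpd 2.4 configuration:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so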
Now you need to map these to specific Locations.
<Location "/api/">
    ProxyPass http://api:80/
    ProxyPassReverse http://api:80/
</Location>
A Location directive operates only on the URL space (the web pages that are served), not on file-system paths.
In this example, api is a named service in the docker-compose file that is exposed on port 80; within a Docker network you can reference containers by their name.
The api service is made available on the reverse proxy at the URL path /api/, which means you can reach the API in your browser at http://your-ip/api/.
To make sure that the api and the other Locations are available on port 80, we wrap the Location, ProxyPass, and ProxyPassReverse directives in a VirtualHost directive like this:
<VirtualHost *:80>
    ServerName ${SERVER_NAME}
    ProxyPreserveHost On

    # Laravel Frontend app server on root path
    ProxyPass / http://frontend/
    ProxyPassReverse / http://frontend/

    <Location "/api/">
        ProxyPass http://api:80/
        ProxyPassReverse http://api:80/
    </Location>

    # ... other directives ....
</VirtualHost>
A VirtualHost is a grouping based on an IP address (or the match-all wildcard *) and a port number. Within that grouping you can override global directives such as Location and Directory.
ProxyPreserveHost On ensures that the original Host header is passed to the backend server, which matters for applications that rely on the Host header in their logic.
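The other routes from the introduction (/cache/ for Redis Commander and /phpmyadmin/ for PHPMyAdmin) are mapped the same way. As a sketch, assuming the service names and ports from the docker-compose example later in this article (redis-ui on 8081, phpmyadmin on 80); the repo's httpd.conf is the source of truth:
    <Location "/cache/">
        ProxyPass http://redis-ui:8081/
        ProxyPassReverse http://redis-ui:8081/
    </Location>
    <Location "/phpmyadmin/">
        ProxyPass http://phpmyadmin:80/
        ProxyPassReverse http://phpmyadmin:80/
    </Location>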
2. Docker with Docker compose
Fun note about their logo, it’s a whale with shipping containers. Matching perfectly with the core idea of the product.
Docker is software that can package other software in a concept called a Docker Container. This container is portable across many different operating environments such as Linux and Windows.
Docker provides a way to isolate the container environment (the Guest machine) from the operating system (the Host machine).
This allows for many different kinds of software and operating environment to run within the containers without interfering with the host machine.
Docker leans heavily on the host operating system; the benefit is that the containers themselves can stay small, since much of the code and the work happens underneath the containers.
Docker compose
Docker Compose is the declarative document format (YAML) for composing a Docker network with one or more Docker containers.
Below is an example of a docker-compose YAML document (shown after this list). For both the gateway and db services we configure:
- image: references an image, which must be available on Docker Hub (or another registry).
- ports: an "outside:inside" mapping, where outside means outside the Docker network (your host machine, or in this case the virtual machine). Here we use the same port outside the Docker network as inside it.
- expose: similar to ports, except the port is not available outside the Docker network.
- volumes: mounts, again as an "outside:inside" mapping. ./apache-config is the Git repo directory where our httpd.conf lives; it's mapped straight onto the Apache 2 configuration file inside the container.
- depends_on: this service waits for the named services to be started first (note that this controls start-up order only, not readiness).
- environment: environment variables available inside the container, typically used to set passwords, ports, and usernames.
services:
  gateway:
    image: httpd:2.4
    container_name: gateway
    ports:
      - "80:80"
    volumes:
      - ./apache-config/httpd.conf:/usr/local/apache2/conf/httpd.conf
    depends_on:
      - frontend
      - api
      - redis
      - redis-ui
      - phpmyadmin
    environment:
      SERVER_NAME: ${APACHE_SERVER_NAME}
  db:
    image: mysql:5.7
    container_name: db
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db-data:/var/lib/mysql
  # ... other services here
volumes:
  db-data:
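The ${...} references are read by Docker Compose from the shell environment or from an .env file next to the docker-compose file. A minimal sketch with placeholder values (the repo may use different names or defaults):
# .env, placed next to docker-compose.yml
APACHE_SERVER_NAME=localhost
MYSQL_ROOT_PASSWORD=change-me-root
MYSQL_DATABASE=laravel
MYSQL_USER=laravel
MYSQL_PASSWORD=change-me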
To run this file we can use: docker-compose up --build -d
This will:
- bring the containers online (up)
- build the images (--build)
- run them in the background (-d)
To take all of the containers offline, simply type: docker-compose down
To manage containers easily from the CLI, I use dry. It provides an easy way to see statistics for all containers (memory and CPU usage), a searchable log viewer, and much more.
Summary of useful Docker commands:
- docker ps: show all running Docker containers.
- docker-compose up --build -d: build the Docker images, bring the containers online, and run them in the background.
- docker-compose down: bring the containers offline.
- docker-compose stop: stop the containers.
- docker-compose restart <name>: restart the specified Docker container.
- dry: run the Docker management and monitoring / log-viewing command-line utility.
  - F2: show all containers (stopped and running).
  - Select a container, then Enter > Fetch logs > Enter > F to tail the logs.
  - Select a container, then Enter > Fetch logs > 30m > F to show the logs from the last 30 minutes and tail them.
3. Vagrant with VirtualBox
Vagrant can create complete development environments on virtual machines (VMs), in the cloud or on your local machine, with a simple workflow.
Vagrant provides consistent environments, so code works regardless of what kind of system your team members use for their development or creative work.
Vagrant is, in my opinion, ideal for quickly setting up development environments that require different dependencies. Additionally, because it's all in code, you can easily adapt the environments to match evolving requirements for your use cases.
For this solution we use the base image generic/ubuntu2204 to build the virtual machine, via a Vagrantfile.
A Vagrantfile is similar to a Dockerfile: it tells the Vagrant program how to create the virtual image (using the base image, or box, and provisioners) and what to run it on (a provider, in this case VirtualBox).
A quick rundown of what this all means, using a shortened example:
- Vagrant.configure("2") denotes the configuration version that we're using: 2.
- config.vm.box = "generic/ubuntu2204": the box specification, running Ubuntu 22.04.
- config.vm.provider "virtualbox": we use VirtualBox to run the VM.
- config.vm.provision "shell", inline:: we use bash scripts to install our environment.
- config.vm.synced_folder: enables VirtualBox's built-in folder sharing; requires the VirtualBox Guest Additions to work.
- config.vm.provider "virtualbox" also holds VirtualBox-specific configuration such as the cpus, memory, and name of the virtual machine.
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # Variables
  git_user_email = 'youremail@mail.com'

  # boxes at https://vagrantcloud.com/search.
  config.vm.define "docker-apache" do |dockerApache|
    config.vm.box = "generic/ubuntu2204"

    # via 127.0.0.1 to disable public access
    config.vm.network "forwarded_port", guest: 80, host: 80
    config.vm.network "public_network"

    # Share an additional folder to the guest VM.
    config.vm.synced_folder "./data", "/vagrant_data"

    config.vm.provider "virtualbox" do |vb|
      # Display the VirtualBox GUI when booting the machine
      # vb.gui = true

      # Customize the amount of memory on the VM:
      vb.memory = "8192"
      vb.name = "docker-apache"
      vb.cpus = 6
    end

    config.vm.provision "shell", inline: <<-SHELL
      sudo apt-get update
      sudo apt-get install -y apt-transport-https ca-certificates curl

      su - vagrant << EOF
      # clone repo
      mkdir -p /home/vagrant/docker-apache
      cd /home/vagrant/docker-apache
      git clone https://github.com/rpstreef/docker-apache-reverse-proxy .
EOF
    SHELL
  end
end
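The comment about 127.0.0.1 refers to Vagrant's host_ip option for forwarded ports. If you want port 80 reachable only from your own machine rather than from your whole network, a variant of that line would look like this (a sketch; adjust to taste):
config.vm.network "forwarded_port", guest: 80, host: 80, host_ip: "127.0.0.1"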
To create a VM from a Vagrantfile we can simply run vagrant up. This starts by downloading the box and then runs the provisioning step, executing the bash scripts on the provider, VirtualBox.
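The Vagrantfile above is a shortened version; the repo's full provisioning scripts also install Docker itself. As a rough sketch of what that step typically looks like on Ubuntu 22.04 (using Docker's convenience script; the repo may do it differently):
# Install Docker via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Allow the vagrant user to run docker without sudo
sudo usermod -aG docker vagrant
# The article uses the hyphenated docker-compose command, available from the distro repos
sudo apt-get install -y docker-compose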
When it's all finished, connect using vagrant ssh and start using it!
Summary of useful Vagrant commands
- vagrant up: creates the virtual machine.
- vagrant validate: verifies that the Vagrantfile is semantically correct.
- vagrant halt: stops the VM; use --force to shut it down immediately.
- vagrant ssh: connects to the VM via the command line.
- vagrant destroy: completely removes the virtual machine.
4. Development workflow
A - Installation
Now that it’s clear how the solution parts work, let’s go and install it:
- Git clone https://github.com/rpstreef/flexible-dev-environment in your local projects directory.
- Install Vagrant using these instructions.
- Install VirtualBox using these instructions.
- Edit the Vagrantfile:
  - Want to use GitHub on the virtual machine?
    - Change the git_user_email and git_user_name values.
    - There's an additional step to complete after the VM is installed; see "D - Updating code with GitHub" on how to set up your Personal Access Token (PAT).
  - Check the machine settings in the config.vm.provider "virtualbox" block: adjust vb.memory = "8192" (memory in MB) and vb.cpus = 6 (number of processors).
- From the Git folder, run vagrant up; this sets up the virtual machine with VirtualBox.
  - When asked which network to use, choose the adapter with internet access.
- Connect to the virtual machine by running vagrant ssh.
  - Take note of the IP address and use it to connect with your browser. On the CLI, type ip address and look for the network adapter name you chose earlier.
    - Web landing page: http://virtual-machine-ip/
    - Redis Commander UI: http://virtual-machine-ip/cache
    - PHPMyAdmin: http://virtual-machine-ip/phpmyadmin
    - Laravel API: http://virtual-machine-ip/api
- When connected to the VM:
  - Execute dry on the command line and you should see several containers running.
  - Refer to the summary of useful Docker commands above for more guidance, and to the summary of Vagrant commands.
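If you prefer plain Docker commands over dry, a quick sanity check from inside the VM (the container names follow the docker-compose file):
# list running containers with their status
docker ps --format "table {{.Names}}\t{{.Status}}"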
B - Accessing the web application services
The Apache gateway provides browser access on port 80 to only the parts that need to be exposed to the outside:
- Front-end → http://ip-address/
- Redis Commander → http://ip-address/cache/
- Laravel API → http://ip-address/api/
- PHPMyAdmin → http://ip-address/phpmyadmin/
That means the Redis and MySQL services are not directly accessible from outside the Docker network (the private network).
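You can also sanity-check the routes from the command line. For example, from inside the VM (replace localhost with the VM's IP address when testing from your host):
curl -I http://localhost/            # front-end
curl -I http://localhost/api/        # Laravel API
curl -I http://localhost/cache/      # Redis Commander
curl -I http://localhost/phpmyadmin/ # PHPMyAdmin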
C - Developing with VSCode
1. Configure SSH Access
Configuring SSH access to the Vagrant machine from your host is really easy: run vagrant ssh-config and copy-paste the output into your ~/.ssh/config file.
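The output of vagrant ssh-config looks roughly like this; the port, and especially the IdentityFile path, will differ on your machine:
Host docker-apache
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /path/to/project/.vagrant/machines/docker-apache/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL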
From the command line, anywhere, you can now connect to your VM with ssh docker-apache.
To get the VM's IP address, run ip address inside the VM, check which adapter you used for network access, and take note of it.
2. Connect to VirtualBox with VSCode
When this works, we can connect VSCode:
- Open up VSCode.
- Click on the lower-left icon (this requires the Remote - SSH extension), choose Connect Current Window to Host, then enter docker-apache. This name comes from the configuration we did in step 1.
- This will install the VSCode server files on the virtual machine.
- Open the directory /home/vagrant/docker-apache.
- Answer Yes, I trust the authors when asked.
To get GitHub to work, we just need to add our Personal Access Token in the next step.
D - Updating code with GitHub
To set up the Personal Access Token for GitHub, do the following:
- Create a PAT here: https://github.com/settings/tokens
- For most cases, repository access only is sufficient (unless you also want to activate GitHub Actions): check the repo checkbox.
- Execute the following on the command line to store your Personal Access Token for GitHub:
git credential-store store <<EOF
protocol=https
host=github.com
username=<your-username>
password=<personal-access-token>
EOF
Replace <your-username> and <personal-access-token> with your own values, and change the host if you use a different Git provider. The credentials are stored in ~/.git-credentials; you can inspect them with cat ~/.git-credentials.
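Note that Git only reads ~/.git-credentials when the store credential helper is enabled. The provisioning scripts may already take care of this; if not, enable it yourself with:
git config --global credential.helper store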
E - (Extra) GitOps with GitHub Actions
With GitHub Actions you can automate, for example:
- docker-compose deployment to your favorite public cloud,
- converting the docker-compose file to Kubernetes manifests and deploying them to your cluster,
- or other automation based on the standard workflows offered by GitHub.
To get started:
- Fork my repository: https://github.com/rpstreef/flexible-dev-environment
- Then go to Actions, where you'll find a list of all kinds of ready-made automations.
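As a small taste of what such a workflow looks like, here is a minimal sketch that just validates the docker-compose file on every push; it assumes the file is named docker-compose.yml and lives in the repo root:
# .github/workflows/validate-compose.yml
name: validate-compose
on: push
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate the compose file
        run: docker compose -f docker-compose.yml config --quiet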
If you’d like more details on how to automatically deploy using GitHub Actions (GitOps), give me a shout on Twitter or in the comments.
Conclusion
Setting up a virtualized development environment might seem daunting at first, but the benefits are well worth the effort. With this setup, you've gained a powerful, flexible, and secure platform for your development work.
I'm curious to hear about your experiences down in the comments:
- How does this compare to your current development workflow?
- Do you see yourself adopting a similar setup, or have you already implemented something like this?
- What other tools or services would you add to enhance this environment?
If you found this guide helpful, consider following me on Twitter for more tech tips and discussions.
For IT professionals looking to balance career growth with personal well-being, I invite you to join our community, The Health & IT Insider. We cover a range of topics from DevOps and software development to maintaining a healthy lifestyle in the tech industry.
Thanks for reading, and see you in the next one!