Aurélien Delogu
How to run a local webserver nowadays

In the early 2000s, when we wanted to develop a website we generally ended up installing Apache locally with PHP, MySQL, and phpMyAdmin (hey, I heard someone say "eew!" in the back). This setup could take anywhere from several hours to several days, depending on what you wanted to achieve (mostly because of Apache's config files).

You could also install EasyPHP or WAMP to save a lot of time, but these tools always came with their own drawbacks, and nowadays a good app/website should be self-contained in its own project directory (e.g. a git repository).

In 2020 (OMG, am I that old?!), development tools have improved a lot. You no longer need to install a local webserver with a complex configuration. Furthermore, to guarantee the same behavior across machines, a project's dependencies and resources should run in isolation and not leak into the system.

Let's see how to achieve this with some of the most trending languages!

The quickest, not-so-dirty way

These methods rely on your having a static website, or internal routing in your application. Any more complex routing/rewriting will, of course, require a real webserver. The same goes if you need a database that you cannot mock. These specific needs are handled in the next section.

Also, note that all of the following commands create a server accessible at http://localhost:8080/ serving the current directory, and that I didn't list every possible alternative from popular frameworks (you should already know how to use them, right?) or every available package, to keep this article as short as possible.


I encourage using a Makefile (or any other tool like NPM scripts) so you don't have to bother with that perpetual "what was the command again?" question when switching between projects. For example:
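The embedded snippet didn't survive in this export; as an illustration (assuming a Python 3 project, purely as an example), it could be as simple as:

```make
# Hypothetical Makefile: the actual command depends on your project's stack
serve:
	python3 -m http.server 8080
```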

Then, you'll always have the same command to type across your projects: make serve.

For static websites

You can use the http-server NPM package. It's a very powerful web server with support for Gzip, SSL, and catch-all routing. Just install it in your project's dev dependencies and run it with npx http-server.

There are also several other web server libraries like glance or harp.

For Deno projects

You can use the standard file_server lib:

$ deno install -f --allow-net --allow-read -n file_server https://deno.land/std/http/file_server.ts
$ ~/.deno/bin/file_server --port 8080

For PHP 5.4+ apps

You can use the PHP binary itself with: php -S localhost:8080.

Also: if you haven't migrated to PHP 7 yet, please do.

For Ruby projects

Starting with 1.9.2, use ruby -run -e httpd . -p 8080.

With older versions, use ruby -rwebrick -e'WEBrick::HTTPServer.new(:Port=>8080,:DocumentRoot=>Dir.pwd).start'.

With Python

Like PHP, Python also has an integrated server!

With Python 2: python -m SimpleHTTPServer 8080.

With Python 3: python3 -m http.server 8080.

With Go language

You can use spark:

$ go get github.com/rif/spark
$ spark -port 8080

In Crystal

We can make use of the standard library (syntax for Crystal 0.25+):

crystal eval 'require "http/server"; server = HTTP::Server.new([HTTP::StaticFileHandler.new(".")]); server.bind_tcp(8080); server.listen'

For Rust projects

I did not forget our Rustaceans folks 😉️

$ cargo install https
$ http

You can also use miniserve.

Finally, with Elixir

elixir --no-halt --app inets -e ":inets.start(:httpd,[{:server_name,'s'},{:document_root,'.'},{:server_root,'.'},{:port,8080}])"

The longer but more reliable way

Maybe you already know Docker? It's a container management tool widely used nowadays to ship apps with their underlying stack (a database, a server, a log manager, etc…).

To test an application locally, creating a Docker image with the same configuration as your production server is clearly the preferred and cleaner way. Containerizing your resources minimizes the side effects of differing environments on your project and makes your team happy.

For our example, I won't go into all the details of creating an image, but I created a GitHub repository with a sample website to show you how it is done. This example covers a Docker image with Nginx and PHP 7: https://github.com/pyrsmk/docker-nginx-example. But the same principle is easily adaptable to Apache and any other language.


To follow along, you'll need to install Docker on your system. When it's done, add your user to the docker group:

$ sudo usermod -aG docker ${USER}
$ su - ${USER}

Now, create an account on DockerHub, and log in to it.

Done? Now let's take a look at what our image looks like.


Here's our Dockerfile:
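The embedded file is missing from this export; based on the notes below and the example repository, a sketch of such a Dockerfile might look like this (the exact package names and S6-Overlay version are assumptions, so check the repository for the real thing):

```dockerfile
# Small base image: Alpine builds and runs fast
FROM alpine:3.12

# Install our stack, plus bash for debugging purposes
RUN apk add --no-cache nginx php7 php7-fpm bash

# S6-Overlay supervises nginx and php-fpm inside the container
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.0.0.1/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /

# Overwrite configuration files directly (stored in the repo's etc folder)
COPY etc /etc

# The website files themselves
COPY www /var/www

EXPOSE 8080
ENTRYPOINT ["/init"]
```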

With the comments, it's pretty straightforward to understand what's going on, but there are several things to note:

  • we're using Alpine Linux to build our image because it is tiny (unlike Ubuntu), so it builds and runs fast
  • to better handle services in our image, we installed S6-Overlay
  • we added bash as a dependency for debugging purposes
  • we're overwriting configuration files directly because it's simpler than processing each file separately; those configuration files are stored in the etc folder of our example repository

The other interesting file is the Makefile (the following example is the concatenation of both Makefiles from the example repository, for simplicity):
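The snippet itself is missing from this export; reconstructed from the task descriptions below (the image name and versioning scheme are assumptions, so refer to the repository for the actual file), it might look roughly like:

```make
IMAGE = pyrsmk/nginx-example

build:
	@docker images $(IMAGE) --format "{{.Tag}}"
	@read -p "New version: " version; \
	docker build -t $(IMAGE):$$version .

publish:
	docker push $(IMAGE)

bash:
	docker run -it --rm $(IMAGE) bash

serve:
	docker run --rm -p 8080:8080 $(IMAGE)
```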

The code for the build task may seem a bit complicated, but that's because we want to automate versioning: it's better to set a version number for each image we build. It's not strictly needed and the code can be simplified, but why not start with good practices?

Let's go over what the tasks are doing:

  • make build: shows us the current Docker image version, asks us for the new version, then builds it
  • make publish: publishes our image on DockerHub; especially useful when working in a team, but also for backups
  • make bash: runs bash inside the container; really useful for debugging while working on the image (Dockerfile, configuration files, etc…)
  • make serve: this is what we're interested in; it runs the image and exposes the inner server on localhost:8080

Note that you won't be able to run the commands from the repository directly, as they rely on my own account pyrsmk 😉️


We're done for today!

If you have any questions, do not hesitate to ask, as I know Docker is not an easy thing to step into!

If you're interested in learning more about Docker, there are many useful official resources to read.


You can subscribe to my mailing list.

If you appreciate my work, you may want to support me for the small price of a coffee ☕️ via Ko-fi.
