
Discussion on: How do you update backend web services without downtime?

Adrian B.G.

Hello, sorry that your first language is PHP. I was stuck in it for my first 5-6 years as a web developer, so I can help you with a timeline of the advancements made in the meantime:

A. Monolith age (1 server/VM)

  1. 1 version. You connect through FTP and overwrite the source code. || With a small project, a small number of users, and some prayers you can achieve no downtime. Cons: around 1000 reasons, don't do it.

  2. N versions. You create a new folder for every release, and nginx/apache points to a symlink. When you finish uploading the code you just switch the symlink to point to the new version. || You can do rollbacks and staging tests. The versions are immutable. See Capistrano; a minimal sketch is below.
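
A minimal sketch of that flow, assuming GNU coreutils, a web server (nginx/apache/php-fpm) serving from the `/var/www/app/current` symlink, and a release tarball; all paths and names here are hypothetical:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout: each release lives in its own immutable folder under
# /var/www/app/releases, and the web server's document root is the symlink
# /var/www/app/current.
APP=/var/www/app
RELEASE="$APP/releases/$(date +%Y%m%d%H%M%S)"

mkdir -p "$RELEASE"
tar -xzf /tmp/build.tar.gz -C "$RELEASE"   # upload/extract the new code here

# Switch the symlink to the new release, then reload the web server.
ln -sfn "$RELEASE" "$APP/current"
sudo systemctl reload nginx                # or apache/php-fpm, depending on the stack
```

A rollback is just pointing the symlink back at a previous release folder, which is essentially what Capistrano automates for you.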

B. Horizontally scaled (multiple servers/VMs)

From here on we add a new layer of complexity (besides the local web server that listens for requests, we have a load balancer that captures the user requests and redirects them to the web servers). This allows us to have zero downtime if the update is done correctly and the new version works.

  1. You apply method 1 (hopefully not) or method 2, but on multiple machines at the same time.

  2. Blue-green deployment, LB and immutable servers: for each new release you create new servers and point the load balancer at the new version. First for only 10% of the traffic for 1 hour (random numbers). If everything is OK with the new version you raise it to 50% and so on. You remove the old servers after a while; see the sketch after this list.
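
A sketch of the traffic-shifting part, assuming nginx is the load balancer and the new servers expose a health endpoint; the hosts, weights, and `/health` path are made up for illustration:

```bash
#!/usr/bin/env bash
set -euo pipefail

BLUE=10.0.0.10:8080    # old version, currently serving all traffic
GREEN=10.0.0.20:8080   # new version, freshly provisioned

# Only proceed if the new version answers its health check.
curl -fsS "http://$GREEN/health" > /dev/null

# Send roughly 10% of traffic to the new version by weighting the upstream.
# Assumes nginx includes conf.d/*.conf and a server block proxies to "app".
cat > /etc/nginx/conf.d/upstream.conf <<EOF
upstream app {
    server $BLUE  weight=9;   # ~90% of requests stay on the old version
    server $GREEN weight=1;   # ~10% of requests hit the new version
}
EOF
nginx -t && nginx -s reload
```

After the observation window you raise the green weight the same way (50/50, then 100/0), and finally tear down the blue servers.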

C. Containers

  1. Instead of servers you apply method 4 (the blue-green approach) with containers (you can have multiple of these "mini virtual machines" on the same physical machine); see the sketch below.
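
A container-flavoured sketch of the same idea, assuming Docker, a hypothetical `myapp` image, and a `/health` endpoint; in practice an orchestrator (Docker Swarm, Kubernetes, etc.) does this rolling/blue-green dance for you:

```bash
# Build the new release as an immutable image (requires a Dockerfile in the current dir).
docker build -t myapp:v2 .

# Run it next to the old container on a different host port; the LB still points at v1.
docker run -d --name myapp-v2 -p 8081:80 myapp:v2

# Health-check the new container, shift load-balancer traffic to port 8081
# (as in the sketch above), then retire the old container.
curl -fsS http://localhost:8081/health > /dev/null
docker stop myapp-v1 && docker rm myapp-v1
```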

Servers -> VMs -> Containers -> and now cloud functions; read more about them and you will understand why and how.

PS: everything is oversimplified to make a point.
PS2: things get more complex when you also have to update a relational database schema for the new version.