
Charan Sajjanapu

Originally published at Medium

Evolution of Web Tech and Browsers

Hey, there! Have you ever wondered how exactly the web works, and what really happens when you enter a URL into that mysterious browser of yours? Don’t worry, you’re not alone — most of us treat the web as some kind of black box. But since you’ve clicked on this blog, I’m guessing you might want to peek inside. That’s fantastic! Curiosity might have killed the cat, but for developers, it’s the secret sauce.

Even if you know a bit about how it works, you might still question why it evolved this way. I believe that “to understand the present and predict or influence the future, we need to know the past”. Or, as some have suggested, we need to understand the chronology! So let’s walk through the evolution of web tech and browsers, breaking it down into four simplified phases for clarity. By the end of this journey, you’ll understand not only how web technologies evolved, but also what happens under the hood, why these changes happened, and what they mean for the future of the web.

Note: These are not exact timelines but rather simplified phases to help with understanding.

Zeroth Phase: The Prehistoric Web

Let’s rewind to a time before the 1980s

Imagine a bunch of researchers scurrying around universities in the U.S., laying down physical wires between computers to transfer or share data. These tech pioneers established protocols like FTP (File Transfer Protocol) and SMTP (Simple Mail Transfer Protocol) to share files and send emails — mostly about their groundbreaking experiments and perhaps the occasional office gossip. There were servers you could connect to through a remote client and use typed commands to store or fetch files on their disks.

This was good for small amounts of data, but as the amount of data grew, finding specific information became a real headache. Retrieving data required knowing exact paths, server addresses, and perhaps doing a little dance to appease the computer gods. Valuable information risked being lost in the digital shuffle, scattered across servers like socks disappearing in a laundry whirlpool.

First Phase: The Birth of the Web

Enter the late 1980s and early 1990s

Along came a brilliant British fellow named Sir Tim Berners-Lee. He penned a proposal called Information Management: A Proposal, in which he talked about using a non-linear text system known as “hypertext”: text that includes links to related information, connected like a giant spider web. This makes navigating and exploring related data easier, and minimizes information loss!

In this proposal, he also referred to interconnected computers as the “web”. And just like that, the World Wide Web was born! He didn’t stop there; he went on to invent the HyperText Transfer Protocol (HTTP) and build the first browser, charmingly named WorldWideWeb (later rebranded as Nexus), the first HTTP web server, and the first website. Talk about overachieving!

  • HTTP (HyperText Transfer Protocol): A set of rules, syntax, and semantics used for the transfer of information between a client (e.g., your web browser) and a web server (the remote computer hosting the website). If you are wondering about the name, it was initially meant only for the transfer of HTML files, but it evolved to support all types of data in later versions, after the introduction of headers, notably the Content-Type header.

  • HTTP Web Server: A computer which can understand this HTTP protocol. Its main job is to parse the request and serve the requested response, which at this point in time was mostly static HTML, CSS, and JPG files.
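
To make this concrete, here is roughly what an early HTTP exchange might have looked like (a simplified sketch; real requests and responses carried a few more headers). The browser sends:

GET /index.html HTTP/1.0

and the server replies:

HTTP/1.0 200 OK
Content-Type: text/html

<html>
  <body>Hello from the early web!</body>
</html>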

This is when HTML (Hypertext Markup Language) came into play, combining the idea of hypertext with SGML (Standard Generalized Markup Language), which was then used for formatting documents. The first version of HTML was pretty basic — it supported headings, paragraphs, lists, and links. No fancy fonts or flashy animations — just the essentials.

For the first couple of years, the web was like an exclusive club for researchers and academics. Then some clever folks developed a browser called Mosaic, which could display images. Yes, images! This made the web more accessible to the general public because, let’s face it, a picture is worth a thousand lines (no picture in this blog though 😞)!

Under the Browser’s Hood

So, let’s see what was happening inside these browsers with just the capabilities described above.

User Interface: Every browser had a navigation bar at the top where all your open tabs (or windows back then) were visible. Below that was the address bar where you’d enter the website’s address. And below that was the viewport, the area where the contents of the website you entered would be displayed. Remember, this was before search engines, so if you didn’t know the exact address, you were out of luck — kind of like trying to find a place without GPS or a map.

Fetching Data: When you entered a website address, the browser’s Network Module would fetch the data by performing tasks like DNS resolution and establishing a connection with the server to start the communication. The browser would then receive data in the form of HTML from the server.

Rendering Engine: The rendering engine would start parsing the HTML. If it encountered tags that referenced additional resources, like images (<img>) or external stylesheets (<link>), it would send out network requests to fetch those in parallel.

Then it would construct a Document Object Model (DOM) tree from the HTML, where each tag became a node in the tree. After fetching and parsing the CSS, it would build a CSS Object Model (CSSOM). These two models were combined to create the Render Tree, which would be used to figure out what to display and how to display it.

  • Layout and Painting: Next came the layout phase, where the Rendering Engine calculated the size and position of each element on the page. Starting from the root <html> element, positioned relative to the viewport (the visible area of the web page), it would work its way down the Render Tree. Finally, in the painting phase, the Rendering Engine communicated with the respective operating system's rendering API to draw everything on the screen.

Living with Limitations

By the end of this phase, users could view static websites and navigate through pages. Forms allowed basic user interactions like entering text and clicking buttons, and this form data was usually transferred through emails to developers. But here’s the catch: there was no way to change the content dynamically based on user interactions. Users could only click and navigate between provided links.

Need to update something? Fetch the whole new HTML again from the server. Want to display different content for different users? Sorry, not happening. You couldn’t sprinkle in any programming logic — no loops, no conditionals, nothing. If you wanted a navigation menu on multiple pages, you’d have to copy and paste the same code everywhere.

Second Phase: The Rise of Server-Side Scripting

Now let’s fast-forward to the mid to late 1990s

Developers started thinking, “What if we could mix programming languages with HTML to add some logic and make our lives easier?” This led to the advent of server-side scripting. Languages like Java, PHP, and Python were embedded into HTML, allowing developers to write code that could process data, make decisions, and generate HTML dynamically. Instead of serving static HTML files, the server could now tailor content for each user.

How Did This Change Things?

From the Browser’s Perspective: The browser still fetched and rendered HTML, but the content it received was more dynamic. Forms became more powerful, capable of sending data to server endpoints using POST requests.

  • Caching and Cookies: Caching became more sophisticated on browsers. They stored resources like images and stylesheets locally, reducing the need to fetch them repeatedly. Cookies were introduced to maintain state over the stateless HTTP protocol. They allowed servers to store small pieces of data on the client side, which were sent back with subsequent requests. This was crucial for things like maintaining sessions, remembering user preferences, and keeping users logged in — so you didn’t have to enter your password every time you blinked.
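
As a rough sketch of the exchange (the header names are real, the values are made up), the server first sets a cookie in its response, and the browser then attaches it to every later request to the same site:

HTTP/1.0 200 OK
Set-Cookie: session_id=abc123

GET /cart HTTP/1.0
Cookie: session_id=abc123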

On the Server Side: Servers got busier. Prior to this, we only had Web Servers, which served static files. But at this point, a few more pieces got introduced, like the Application Server and the Database Server (which could store user data, product catalogs, and more). Servers now handled scripts that could process user input, interact with databases, and generate customized HTML. This was the era when e-commerce started to bloom. Companies like Amazon and eBay could dynamically display products based on user searches, preferences, and behaviours.

  • Application Server: A Web Server inherently can’t run any scripts, so Application Servers were introduced to do this work. Basically, an Application Server sits behind the Web Server. Whenever a request comes in, the Web Server checks (based on configuration) whether it needs to go to the Application Server and forwards it accordingly. The Application Server then processes the request, executes the script, and produces an HTML file, which is transferred back to the Web Server to serve to the client. So the Web Server acts as a kind of reverse proxy for the Application Server.

An Example with JSP (JavaServer Pages)

In my college days, I remember tinkering with an old JSP project. These scripts generally follow a common structure: they consist of HTML, and wherever logic needs to be added, it is embedded using special identifiers such as <% (open) and %> (close) in the case of JSP. Here is a simple example:

greet.jsp

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
    <title>Greeting Page</title>
</head>
<body>
    <h1>Greeting:</h1>
    <p>
        Hello, <%= request.getParameter("name") != null ? request.getParameter("name") : "Guest" %>!
    </p>
</body>
</html>

In this example, when a user submits their name, the server receives the HTTP request containing the form data, processes the information, and dynamically generates a personalized greeting for the user. No more generic “Hello, World!” Now it’s “Hello, [Your Name]!” — instant ego boost.
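
For reference, the request in this example could come from a small HTML form like the following (a hypothetical page that points at greet.jsp):

<form action="greet.jsp" method="get">
    <label for="name">Your name:</label>
    <input type="text" id="name" name="name" />
    <button type="submit">Greet me</button>
</form>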

Dynamic Content Generation: Instead of hardcoding lists or content, you could fetch data from a database and loop through it or use it to generate HTML elements. For example, displaying a list of fruits:

<%
    String[] fruits = {"Apple", "Banana", "Cherry", "Mango"};
%>

<ul>
    <% for(int i = 0; i < fruits.length; i++) { %>
        <li><%= fruits[i] %></li>
    <% } %>
</ul>

This could easily be expanded to fetching fruits from a database, making your content more dynamic and fresh!

The New Challenges

While server-side scripting was a game-changer, it wasn’t without its challenges.

Full Page Refreshes: Every time you interacted with the server, like submitting a form or clicking a link, the entire page had to reload, because any logic could only be executed on the application server, which generated new HTML. Users had to wait for the server to respond before seeing the results of their actions. This caused a not-so-great user experience.

Server Load: Servers had to handle all the processing, from running scripts to querying databases. As websites became more popular, servers struggled under the increased load, resulting in longer load times and delays for users. We know that patience is a virtue, but it is not one that users have in abundance. With browsers and client-side computers becoming increasingly capable, the question arose: why couldn’t they take on some of the workload to improve performance and responsiveness?

Third Phase: The Client-Side Revolution

Enter a time when developers were growing tired of those full-page reloads. It was time for a change, and that change came in the form of client-side scripting or Client-Side Rendering (CSR).

JavaScript, which had been quietly introduced by Netscape back in 1995, now began to take the spotlight. It enabled developers to execute code directly in the user’s browser, meaning not every interaction had to involve the server. This led to much smoother, more responsive web experiences. Several main factors were involved in this revolution:

Increased Browser Capabilities: As time passed, user devices became more and more powerful — and so did browsers. So it became obvious to offload some of the work from the server to the browser, which improved the user experience drastically. Browsers were no longer just passive document viewers; they evolved into platforms capable of running complex applications.

Web APIs: To harness this newfound power, browsers began offering Web APIs — a set of functions that allowed JavaScript to interact with the browser’s capabilities. A few main Web APIs that helped JavaScript to evolve are:

  • The DOM API provided a way to interact with and manipulate the structure and content of web pages dynamically in the browser. It also allowed adding event listeners like click, mousemove, etc., on any element, enabling developers to respond to user interactions instantly. Want to add a new paragraph when a user clicks a button? No need to regenerate the whole HTML; it’s easy peasy.
document.getElementById('myButton').addEventListener('click', function() {
  const newPara = document.createElement('p');
  newPara.textContent = 'You clicked the button! Congratulations!';
  document.body.appendChild(newPara);
});
  • XMLHttpRequest was a game-changer. It enabled developers to make non-blocking asynchronous HTTP requests directly from code running in the browser and fetch data from the server. Back in the day, data was typically transferred in XML format — hence the name — but JSON later took over because it’s less verbose and easier to work with. Eventually, the fetch API came along, offering advanced features and a cleaner syntax.

AJAX (Asynchronous JavaScript and XML): AJAX was one of the pivotal techniques behind this revolution. It describes how web pages can communicate with servers in the background to get the required data and update the content directly in the browser, using both the DOM API and XMLHttpRequest, without requiring a full-page reload. Suddenly, the web started to feel… interactive! Again, the name came from XML, which was the format used for data transfer at the time.
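
Here is a minimal sketch of the classic AJAX pattern (the /api/greeting endpoint and the greeting element are made up for illustration):

const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/greeting?name=Charan'); // hypothetical endpoint
xhr.onload = function () {
  // Update just one part of the page; no full reload needed.
  document.getElementById('greeting').textContent = xhr.responseText;
};
xhr.send();

// The modern equivalent with fetch:
// fetch('/api/greeting?name=Charan')
//   .then((res) => res.text())
//   .then((text) => { document.getElementById('greeting').textContent = text; });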

How Did This Change Things?

From the Browser’s Perspective: A new <script> tag was introduced in HTML, allowing developers to add JavaScript code directly to HTML. When the browser’s rendering engine parses the HTML and encounters a <script> tag, it fetches and executes the script (the exact timing depends on the async and defer attributes), allowing for dynamic content and interactive features.

  • JavaScript Engine: To evaluate these scripts, JavaScript engines were introduced in browsers. Early engines were relatively simple and slow, but modern engines like Google’s V8 and Mozilla’s SpiderMonkey have evolved significantly, allowing for more complex client-side logic.

  • Event Loop: JavaScript was designed as a single-threaded scripting language to keep the language simple and lightweight. This design choice was also influenced by factors such as limited computing resources and keeping the DOM (Document Object Model) thread-safe, preventing race conditions when manipulating it.

    Because all script execution and rendering tasks share a single thread, JavaScript needed a way to handle asynchronous operations without blocking the main thread. To achieve this, JavaScript relies on the concept of the Event Loop.

    Let’s see how it works internally:

    When the rendering engine parses the HTML and encounters a <script> tag, it fetches the script and hands it over to the JavaScript engine. The JavaScript engine begins executing the script by pushing execution contexts onto the Call Stack.

    JavaScript relies on Web APIs provided by the browser to handle asynchronous tasks without blocking the main thread. These include APIs for timers (setTimeout, setInterval), HTTP requests (fetch, XMLHttpRequest), DOM events, and more. When an asynchronous operation is invoked, the JavaScript engine hands it over to the Web API, along with a callback function, and continues executing the rest of the code. Once the asynchronous task is done, the browser pushes the callback to the respective task queue (Microtask Queue or Macrotask Queue).

    Here is how the event loop manages these queues along with rendering:

    1. Microtask Queue: Promise callbacks, MutationObserver callbacks, etc. go in here. This queue has the highest priority. After the call stack is empty, the Event Loop checks the microtask queue. It processes all microtasks in the queue, one after another, by pushing them onto the call stack for execution. If any microtasks add new microtasks to the queue, those are also processed before moving on.

    2. Rendering: Once the microtask queue is empty, the rendering engine may take over the main thread and perform rendering if necessary. This includes updating the UI to reflect any DOM changes made during script execution and microtask processing. Rendering is aligned with the device’s frame rate (at most once per frame) to optimize performance.

    3. Macrotask Queue: Callbacks from setTimeout, setInterval, DOM events, I/O events, etc. go in here. This queue has the lowest priority. The Event Loop pulls one task from this queue and executes it. After executing this task, it processes all microtasks and rendering before pulling the next macrotask.

    [Call Stack Empty] → [Process All Microtasks] → [Render if needed] → [Execute One Macrotask] → Repeat
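
    To see this ordering in action, you can paste a tiny script like this into a browser console:

    console.log('script start');                                     // runs synchronously on the call stack
    setTimeout(() => console.log('macrotask: timeout'), 0);          // callback goes to the macrotask queue
    Promise.resolve().then(() => console.log('microtask: promise')); // callback goes to the microtask queue
    console.log('script end');

    // Output order: script start, script end, microtask: promise, macrotask: timeout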

On the Server Side: With the client side handling more of the user interface and user interactions, servers began to shift their focus. Servers started to provide data and business logic through APIs. They evolved from serving static HTML files to becoming powerful engines that processed requests, interacted with databases, and performed complex computations to provide data. Instead of serving only browsers, they started serving a wide range of clients (mobile and desktop applications, other servers, etc.).

REST: The concept of REST became popular. REST (Representational State Transfer) is an architectural style that provides a set of guidelines and constraints for designing web services. To summarise, each resource is uniquely identified by a URL, and standard HTTP methods like GET, POST, PUT and DELETE are used to manipulate these resources in stateless client-server interactions. This helped keep servers simple, scalable and efficient.
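
As a quick illustration (the /api/books resource is hypothetical), a RESTful service might expose endpoints like these, and a client would call them statelessly:

// GET    /api/books      → list books
// GET    /api/books/42   → fetch the book with id 42
// POST   /api/books      → create a new book
// PUT    /api/books/42   → replace book 42
// DELETE /api/books/42   → delete book 42

// A stateless client call: everything the server needs travels with the request.
fetch('/api/books/42', { headers: { Accept: 'application/json' } })
  .then((res) => res.json())
  .then((book) => console.log(book.title));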

As applications became more popular, servers had to handle more data processing and business logic. Application servers and database servers needed to process data very efficiently. Servers had to adopt a few new techniques to cope with this heavy load, such as Server-Side Caching, Load Balancing and Scalability, Microservices Architecture, etc.

JavaScript Everywhere: With the introduction of Node.js (a JavaScript runtime environment), JavaScript could now run outside the browser, bringing about the idea of “JavaScript Everywhere” (client-side and server-side). Along with it came npm (Node Package Manager), which helped developers easily share JavaScript code. With these, the JS ecosystem grew fast, providing all the necessary tools (frameworks, bundlers, compilers, transpilers, etc.) required for a project.
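
For example, a few lines of Node.js are enough to run an HTTP server entirely in JavaScript (a minimal sketch):

// server.js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from JavaScript on the server!' }));
});

server.listen(3000, () => console.log('Listening on http://localhost:3000'));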

Single Page Applications: With JavaScript now handling DOM creation and manipulation on the client side, there was a need for frameworks to build complex applications more efficiently. Enter frameworks and libraries like Angular and React. This is the peak of client-side scripting. Basically, in an SPA, only one small HTML file is fetched from the server, which pulls in the entire application as bundled JavaScript through a <script> tag. This script then takes care of all user interaction and UI updates by itself, including the initial render.
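
The essence of an SPA can be captured with a toy hash-based router (a sketch that assumes an empty <div id="app"> in the HTML shell):

const routes = {
  '#/': 'Home page content',
  '#/about': 'About page content',
};

function render() {
  // The view is rebuilt entirely in the browser; no new HTML page is requested.
  document.getElementById('app').textContent = routes[location.hash || '#/'] || 'Not found';
}

window.addEventListener('hashchange', render);       // navigation = re-render, not reload
window.addEventListener('DOMContentLoaded', render); // even the initial render is done by the script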

Still Challenges

Even though this phase solved the challenges of the previous phase, it created new challenges, such as:

  • Longer Initial Load Times: SPAs often meant large initial JavaScript bundles, which could slow down initial load times — especially on slower networks or devices. Even though users only want a few parts of it, they have to download the whole script — like downloading the entire movie just to watch the trailer. Developers had to employ techniques like code splitting, lazy loading, tree shaking, minification, etc. to optimize this (see the sketch after this list for one such technique).

  • SEO Concerns: As most of the content is dynamically generated, search engines struggle to index these websites. It’s hard to get noticed when Google’s crawler can’t see your site. Techniques like Server-Side Rendering (SSR) and pre-rendering can address these issues.

  • JavaScript Fatigue: With new frameworks, tools and libraries popping up every day, developers became tired of trying to catch up. Keeping up with the latest trends is like running on a never-ending treadmill! Choosing the right stack also became very difficult with so many options.

  • Faster, yet slower: Both server-side and client-side performance kept improving, but not as much as developers would have liked in order to impress users. And even though browsers are fast and capable, they are not as fast or capable as native applications; for example, you can’t easily build complex applications like games or video editors.
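
As one example of those optimizations, code splitting with a dynamic import loads a chunk of JavaScript only when it is actually needed (./heavy-editor.js and editButton are made-up names):

document.getElementById('editButton').addEventListener('click', async () => {
  // This chunk is fetched only when the user asks for it,
  // keeping it out of the initial bundle.
  const { openEditor } = await import('./heavy-editor.js');
  openEditor();
});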

Fourth Phase: The Modern Web and Beyond

Welcome to the modern era of web development — a time when the web is more dynamic, powerful, and user-centric than ever before. Let’s discuss a few exciting things that are happening currently, which might solve the challenges of the previous phase.

No Single Perfect Rendering Solution

After the previous phase, we found out that there is no single perfect rendering strategy for everyone. We have to choose the right (sometimes hybrid) rendering strategy based on our requirements and constraints. Here are a few such rendering strategies:

  1. Static Site Generation (SSG): Unlike Client-Side Rendering (CSR), where JavaScript creates HTML in the user’s browser, SSG generates whole HTML files at build time and serves these pre-generated static HTML, CSS, and JS files to users upon request. We can even use a CDN to speed up the response.

    It solves issues like longer initial load times and SEO concerns. It also has a very fast First Contentful Paint. We don’t have to worry much about scaling issues because static files can be easily served via CDNs. We can choose this strategy if most of the content on our website is static. However, if the content is more dynamic and user-specific, then this won’t work well. In those cases, we can choose a hybrid approach using SSG and CSR with Hydration, as explained below. You can also check out the JAMstack, which builds on SSG with serverless APIs and edge functions.

  2. Server-Side Rendering (SSR): It is a bit similar to the second phase, but here, the business logic is separated from the client-side script. Basically, instead of browsers running the script and generating HTML, we run the scripts on the server to generate HTML and serve it to the browser on every user request.

    Again, even though it solves issues like longer initial load times and SEO concerns, it increases server load. Also, most of us don’t want the page to reload every time the user interacts, so we have to go with a hybrid approach that includes CSR with hydration. (A small sketch after this list contrasts SSG and SSR.)
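
Here is a tiny sketch of the difference in Node.js terms (file names and data are made up): SSG writes HTML once at build time, while SSR generates it for every request.

// build.js (SSG): run once at build time, writing static files into an existing dist/ folder
const fs = require('fs');
const posts = [{ slug: 'hello', title: 'Hello, Web' }];
for (const post of posts) {
  fs.writeFileSync(`dist/${post.slug}.html`, `<h1>${post.title}</h1>`);
}

// server.js (SSR): generate the HTML on every request
const http = require('http');
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<h1>Rendered at ${new Date().toISOString()}</h1>`);
}).listen(3000);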

Hydration: Basically, we first serve a static HTML file to render, and then add user interactivity by running the JavaScript after the initial render. This process of converting static web pages into dynamic web pages is called Hydration. After hydration, the application behaves similarly to a CSR application.
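
In React terms (just one framework’s flavour of this idea), hydration looks roughly like the sketch below; App is a placeholder component name.

// client.js (a sketch, not a full setup)
import React from 'react';
import { hydrateRoot } from 'react-dom/client';
import App from './App'; // the same component the server used to render the HTML

// The HTML inside #root was already sent by the server.
// hydrateRoot attaches event listeners to that existing markup
// instead of throwing it away and re-rendering from scratch.
hydrateRoot(document.getElementById('root'), React.createElement(App));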

Hydration brings a few issues with it: users can’t interact with the content as soon as they see it. They have to wait until the script is downloaded and run, which again adds overhead similar to CSR. To mitigate this, there are newer hydration approaches like Progressive Hydration and Partial Hydration, but these are hard to implement.

Modern frameworks like Next.js, Nuxt.js, Gatsby, and React provide this variety of rendering strategies, along with newer techniques like Incremental Static Regeneration and Streaming SSR.


Quite exhausting, right? Yeah, I know, but we are almost done. There are also many new Web APIs that have been added or proposed. While we can’t cover all of them, do check out some notable ones like Web Workers, IndexedDB, and Shared Storage.

But here are the two main web technologies that we can be excited about:

WebAssembly (Wasm)

Even though modern JavaScript engines keep making JS faster, the main bottleneck lies in the fact that JS is a dynamically typed, non-compiled language. This means we can’t rely on JS for really performance-heavy computations. This is where WebAssembly steps in. WebAssembly is a binary instruction format generated from code written in languages like C, C++, and Rust, which can run at near-native speed in all modern browsers without plugins.

Wasm works alongside JavaScript, allowing functions to be called between them. It doesn’t yet support direct manipulation of the DOM or usage of other Web APIs, but ongoing development aims to add this functionality soon. Wasm can be used for any performance-intensive web applications, such as gaming, image processing, video editing, and more.
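
From JavaScript’s side, using a compiled module is just a couple of calls (add.wasm and its exported add function are made-up names for illustration):

WebAssembly.instantiateStreaming(fetch('add.wasm'), {}) // {} is the (empty) imports object
  .then(({ instance }) => {
    // Call a function exported by the Wasm module from plain JavaScript.
    console.log(instance.exports.add(2, 3));
  });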

Progressive Web Apps (PWA)

With this extensive web ecosystem, why can’t we build native-like web applications? Progressive Web Apps are web applications that can be installed directly from browsers and work similarly to native apps, providing offline support, push notifications, and more. And they deliver fast, engaging, and reliable experiences across platforms with one codebase.

A key component of PWAs is the Service Worker, which runs background scripts separately from the main browser thread. It can act as a proxy between the web app, the browser, and the server. Service Workers can control how requests are handled and delay certain actions until the user has stable connectivity. This allows us to cache resources using the Cache API, so content can be served even when the device is offline or on a slow network, and then synced with the server once back online.
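
A minimal sketch of that idea (file names are placeholders; the Service Worker and Cache APIs shown are standard):

// main.js: register the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js: pre-cache a few files, then answer requests cache-first
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('v1').then((cache) => cache.addAll(['/', '/styles.css', '/app.js']))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});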

However, not all features are uniformly supported across browsers yet, and building a PWA requires careful planning of caching strategies and offline behaviour. PWAs also can’t access all the device capabilities that native apps can yet, but this is expanding.


The web is not perfect yet, but as we look ahead, the lines between web, native, and desktop applications continue to blur. With so many emerging technologies, the possibilities are limitless.

This brings us to the end of our blog. Please let me know if I am wrong about something or if I missed anything. Keep exploring and keep learning. Bye!
