It is the year 2024. AI has taken over the world (sort of), and I have shifted my focus from the web platform to Cloud Technologies and AI, but I had to come back for another round to talk about a topic I see new web devs struggling with even today: asynchrony.
Many words have been put together, one in front of the other, by many acclaimed experts, but I still want to share my side of the story. So let me take you back in time to 2010, an age when developer tooling for JavaScript was basically non-existent, IDE auto-completion was not a thing, and you had to carefully consider the performance impact of CSS border-radius and box-shadow. The good ol’ days.
But before diving in, why is asynchrony something we even need to consider? It so happens that JavaScript is single-threaded (I’m intentionally avoiding workers here to keep things simple), meaning it can only do one thing at a time. You may be thinking: “Hmm, that doesn’t sound like what I see every day. I can do many things at once: clicking here, handling events there, making requests, and updating the DOM all at the same time.” And yes, it might look like everything is being executed all at once; however, that’s not the case. It’s all orchestrated by a really efficient event loop. If you want to learn more, here’s a talk from Jake Archibald that I consider to be the best follow-up to Philip Roberts’ talk on the event loop. So yep, we need to start tasks off and then let the event loop do its thing until our results are ready.
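To make that “one thing at a time” idea concrete, here’s a tiny sketch (not one of this article’s original examples) of the event loop deferring a callback until the currently running code has finished:

console.log("first");
// Even with a 0ms delay, this callback is queued and only runs
// once the synchronous code below has completed.
setTimeout(function () {
  console.log("third");
}, 0);
console.log("second");
// Output: first, second, third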
Making an HTTP request
Nowadays you probably use a framework that abstracts away the complexity behind sending an HTTP request to a server, and to be honest, we sometimes did the same when using libraries such as jQuery, Backbone.js, or MooTools. However, from time to time we had to use plain vanilla JavaScript to communicate with a server, which meant using XMLHttpRequest directly.
XMLHttpRequest allowed us to handle server requests asynchronously without requiring a page reload (you may have heard of AJAX, or Asynchronous JavaScript and XML; this is where the term comes from). This approach was fundamental in allowing background data loading without interrupting the user experience. However, even though XMLHttpRequest had been available natively since Internet Explorer 7 (let’s not talk about the times before that and ActiveXObject), it wasn’t widely used directly ¯\_(ツ)_/¯. In any case, here’s how you’d make a GET request to https://example.com/api/v1/plants:
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api/v1/plants");
xhr.onreadystatechange = function () {
  // readyState 4 (DONE) means the operation has completed
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log(xhr.responseText);
  }
};
xhr.send();
The first line creates a new XMLHttpRequest object, which is then prepared by specifying the method to be used and the URL. After that, a callback function is created where the readyState of the client is checked (4 means the client is done) along with the server response status. If those conditions are met, you can read the responseText and do something with the data you just got from the server, without having to reload the page. Pretty neat, right? Well, it was back then.
Oh, but what about that last line? Well, the “ready state change” listener will only fire after the send method is invoked, so that line is crucial, and I must admit it’s the one I forgot to include more than once.
Callbacks and Drawbacks
The example we just saw up there is a bit basic, but it serves as the building block of an actual “server request” function we can include in our code and reuse later on. To do so, we will make use of callbacks, which have been a fundamental feature of JavaScript since its very beginnings. In JavaScript, functions can be passed as arguments to other functions, a feature leveraged extensively when handling asynchronous tasks.
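If you haven’t seen that pattern before, here’s a minimal sketch (the greet function is made up purely for illustration) of passing a function as an argument and invoking it later:

// greet doesn't know what will happen with the message;
// it just invokes whatever function it receives.
function greet(name, callback) {
  callback("Hello, " + name + "!");
}

greet("world", function (message) {
  console.log(message); // "Hello, world!"
});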
Here’s our XMLHttpRequest code packed into a single function we can call from different parts of our app:
function serverRequest(url, method, body, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open(method, url);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Parse the JSON payload so callers get an object instead of a raw string
      callback(null, JSON.parse(xhr.responseText));
    } else if (xhr.readyState === 4) {
      callback(new Error("Request failed with status " + xhr.status));
    }
  };
  if (body) {
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify(body));
  } else {
    xhr.send();
  }
}
The XMLHttpRequest logic is abstracted away into a function we called serverRequest. This function will take in the URL we want to interact with, the method to use, the body (if we’re sending a POST or PUT request), and finally, a callback function that will be invoked whenever the onreadystatechange listener gets either a response or an error.
This serverRequest abstraction is called like this:
function responseHandler(err, data) {
  if (err) {
    console.error(err);
  } else {
    console.log('Response:\n', data);
  }
}

serverRequest("https://example.com/api/v1/plants", "GET", undefined, responseHandler);
Here we created a simple responseHandler function to take care of whatever the serverRequest function returns, but in a real-world scenario, you’d call different handlers for different types of requests. You would oftentimes chain these callback functions when you wanted to make subsequent requests, for instance, based on the response you received, which would eventually lead to what we knew as “callback hell”.
The “callback hell”
Also known as “the pyramid of doom,” callback hell is the term used to refer to multiple nested callbacks, which lead to deeply indented code that is a headache to read and debug.
Example of Callback Hell
serverRequest("https://example.com/api/v1/plants/1", "GET", undefined, function(err, data1) {
if (err) {
console.error(err);
} else {
var dataSheetPage = data1.data_sheet_page;
serverRequest("https://example2.com/api/v1/data-sheets/" + dataSheetPage, "GET", undefined, function(err, data2) {
if (err) {
console.error(err);
} else {
var plantId = data1.plantId;
var isEdible = data2.isEdible;
var recipe = data2.recipe;
serverRequest("https://example3.com/api/v1/recipes/", "POST", {"plantId": plantId, "isEdible": isEdible, "recipe": recipe}, function(err, data3) {
if (err) {
console.error(err);
} else {
console.log("Recipes API Response: ", data3);
}
});
}
});
}
});
Despite their drawbacks, callbacks were the go-to solution for handling asynchronous operations in JavaScript for a long time, simply because there was no better way to handle asynchronous events and operations. You could avoid the “pyramid of doom” shape by using named functions (like when we used responseHandler), as the sketch below shows, but the complexity was still there. Their limitations became more apparent as applications grew in complexity, leading to the adoption of more robust solutions with structured control flows and error-handling capabilities. This transition marks a significant evolution in JavaScript programming practices, aiming for greater readability, maintainability, and efficiency in managing asynchronous code.
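Here’s a rough sketch (not part of the original examples) of the same chain flattened with named functions. Notice how we still need a small anonymous wrapper to carry data1 from one step to the next, which is exactly the complexity that stuck around:

function handlePlant(err, data1) {
  if (err) return console.error(err);
  serverRequest("https://example2.com/api/v1/data-sheets/" + data1.data_sheet_page, "GET", undefined, function (err, data2) {
    // This wrapper exists only to pass data1 along to the next step
    handleDataSheet(err, data1, data2);
  });
}

function handleDataSheet(err, data1, data2) {
  if (err) return console.error(err);
  serverRequest("https://example3.com/api/v1/recipes/", "POST", {"plantId": data1.plantId, "isEdible": data2.isEdible, "recipe": data2.recipe}, handleRecipes);
}

function handleRecipes(err, data3) {
  if (err) return console.error(err);
  console.log("Recipes API Response: ", data3);
}

serverRequest("https://example.com/api/v1/plants/1", "GET", undefined, handlePlant);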
From Callbacks to Promises
The idea of representing something that hasn’t completed yet (successfully or otherwise) existed in JavaScript long before it made it into the official ECMAScript Language Specification. There were many libraries that aimed to achieve this goal (think jQuery’s Deferred objects, or libraries like Q and Bluebird); however, the fact that they used different implementations to represent a Promise meant that interoperability between libraries was a headache. Fortunately, Promises were finally introduced officially as part of the language in ES6, back in 2015.
Having a standard meant that we could all use the same syntax and terminology when handling async processes.
function serverRequest(url) {
  return new Promise((resolve, reject) => {
    // Simplification: a timer stands in for an actual network request
    setTimeout(() => {
      const result = false // Here you'd bring your data from somewhere / perform an actual task to obtain it
      if (result) {
        resolve("Success")
      } else {
        reject("Failure")
      }
    }, 1000)
  })
}
serverRequest("https://example.com/api/v1/plants")
  .then(successMessage => {
    // Do something on success
    console.log(successMessage)
  })
  .catch(errorMessage => {
    // Handle error
    console.error(errorMessage)
  })
  .finally(() => {
    // Do something after the async process has completed, either fulfilled or rejected
    console.log('Async task performed.');
  })
Promises significantly improved the readability of async operations, their error-handling processes, and lifecycle operations. Here, the Promise’s resolve callback passes along the result of the async operation, which will be captured by the then method later on; on the other hand, calling the reject callback will make the catch run, allowing you to define the specific logic for both cases in a separate context. The finally method is called regardless of which of the Promise callbacks is executed, allowing you to perform operations that should happen after the async task is completed, such as updating the UI.
Continuing with our previous set of examples around HTTP requests, let’s move forward to an era where the global fetch method is present in all JavaScript engines, and use the promises it returns along with the then, catch, and finally methods we just saw:
function serverRequest(url, method, body) {
  return fetch(url, {
    method: method,
    body: JSON.stringify(body),
    headers: {"Content-Type": "application/json"}
  })
    .then(response => {
      if (!response.ok) {
        // Parse the error payload (if any) and surface it together with the status code
        return response.json().then(errorData => {
          throw new Error(`Server responded with status ${response.status}: ${JSON.stringify(errorData)}`);
        });
      }
      return response.json();
    });
}
serverRequest("https://example.com/api/v1/plants/1", "GET")
.then(successMessage => {
// Do something on success
console.log(successMessage)
})
.catch(errorMessage => {
// Handle error
console.error(errorMessage)
})
.finally(() => {
// Do something after the async process has completed, either fulfilled or rejected
console.log('Async task perfomed.');
})
This is definitely better compared to the first code snippet up top. The readability improved quite a lot, but it can get even better, as you will see in the next section.
Promises came along with several methods that greatly improved the developer experience when handling several async operations at the same time, such as Promise.all, which takes in an iterable (let’s say an array) of Promises. If all of the Promises are fulfilled, then a new Promise is created with the aggregated values. If at least one of them fails, the whole async operation is considered to fail and the catch method will be invoked. Other useful methods are Promise.race, Promise.any, and Promise.allSettled, and you can check them out over at MDN.
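As a quick sketch (reusing the fetch-based serverRequest from above, with made-up URLs), here’s how Promise.all lets you fire two requests in parallel and fail fast if either of them rejects:

Promise.all([
  serverRequest("https://example.com/api/v1/plants/1", "GET"),
  serverRequest("https://example.com/api/v1/plants/2", "GET")
])
  .then(([plant1, plant2]) => {
    // Both requests fulfilled; results arrive in the same order as the input array
    console.log(plant1, plant2)
  })
  .catch(err => {
    // Runs as soon as either request rejects
    console.error(err)
  })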
A new era: Async/Await
Building on promises, async/await was introduced to simplify the way we write asynchronous JavaScript code. The async and await operators first shipped in 2016 with Chrome 55.
These new operators allow us to handle async operations in a more straightforward way. Let’s check out an example, and what better way to talk about a modern operator than upgrading our initial XMLHttpRequest example code to the current standard, fetch, plus a few other “modern” JavaScript additions:
async function serverRequest(url, method, body) {
  const response = await fetch(url, {
    method: method,
    body: JSON.stringify(body),
    headers: new Headers({"Content-Type": "application/json"})
  });
  // fetch only rejects on network failures, so surface HTTP errors ourselves
  if (!response.ok) {
    throw new Error(`Server responded with status ${response.status}`);
  }
  const data = await response.json();
  return data;
}
This approach reduces the boilerplate associated with promise chains and improves the legibility and maintainability of the code. But the best part comes when we compare the “pyramid of doom” with its modern counterpart:
async function pushRecipe() {
  const plantData = await serverRequest("https://example1.com/api/v1/plants/1", "GET")
  const dataSheet = await serverRequest(`https://example2.com/api/v1/data-sheets/${plantData.data_sheet_page}`, "GET")
  const recipes = await serverRequest("https://example3.com/api/v1/recipes", "POST", {"plantId": plantData.id, "isEdible": dataSheet.isEdible, "recipe": dataSheet.recipe})
  console.log(recipes)
}

pushRecipe()
But what’s actually going on here, you may be wondering? Back in the day, we may have referred to async/await as syntactic sugar, meaning it was only an abstraction over the existing Promise methods. Even though it brings more than that to the table, it’s a good way to think about it to get started. The async operator is used to “tag” a function as asynchronous, allowing us to then use the await operator inside of it to resolve a promise and assign its result automatically. This means we don’t need to use the Promise’s then method to capture a value; that is done for us, and we can simply focus on using that value, like in the definition of the response inside the serverRequest function:
const response = await fetch(url, {
// …
Or when assigning the server’s response to an API call:
// …
const plantData = await serverRequest("https://example1.com/api/v1/plants/1", "GET")
// …
What about error handling?
Well, there’s a statement that’s been part of JavaScript from its very first days that we can use: try…catch. Wrapping our calls to serverRequest in it captures the errors and lets you handle them with as much detail as you prefer:
async function pushRecipe() {
  try {
    const plantData = await serverRequest("https://example1.com/api/v1/plants/1", "GET")
    const dataSheet = await serverRequest(`https://example2.com/api/v1/data-sheets/${plantData.data_sheet_page}`, "GET")
    const recipes = await serverRequest("https://example3.com/api/v1/recipes", "POST", {"plantId": plantData.id, "isEdible": dataSheet.isEdible, "recipe": dataSheet.recipe});
    console.log(recipes);
  } catch (err) {
    console.error(err);
  }
}
Conclusion
The evolution of asynchrony in JavaScript, from basic event handling with XMLHttpRequest to the more sophisticated async/await syntax, demonstrates a clear trajectory toward improving developer experience and code efficiency. Each stage in this evolution has aimed to mitigate the challenges posed by its predecessors, leading to code that is easier to understand and more robust to manage. This progression not only makes the language more accessible but also enhances its capability to handle the complex requirements of modern web applications, making the statement “always bet on the web” an indisputable truth.
Where to go from here?
The world is your canvas! Go ahead and create your masterpiece! But if you ask me, AbortController is a pretty cool technology people should be taking advantage of. It allows you to abort HTTP requests when a signal is fired (and this is HUGE, we couldn’t do that in the old times!), helping you use your resources wisely and further improving your users’ experience. Let me know in the comments if you want me to write a piece on that topic :D
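To give you a taste, here’s a minimal sketch (the URL and the 5-second timeout are arbitrary) of cancelling an in-flight fetch with an AbortController signal:

const controller = new AbortController()

// Abort the request if it hasn't completed after 5 seconds
const timeoutId = setTimeout(() => controller.abort(), 5000)

fetch("https://example.com/api/v1/plants", { signal: controller.signal })
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => {
    if (err.name === "AbortError") {
      console.error("Request was aborted")
    } else {
      console.error(err)
    }
  })
  .finally(() => clearTimeout(timeoutId))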