Calling another API from Node.js is straightforward. Starting with Node.js version 18, a web-compatible fetch() function is available out of the box. However, since it is still marked as experimental, battle-tested libraries like Axios remain a good fit for production environments, offering a convenient API and a comprehensive feature set:
import axios from 'axios';

const http = axios.create({
  baseURL: 'https://example.com/api'
});

export function getUser(userId: string) {
  // Equivalent to `GET /user?id=<userId>`
  return http.get('/user', {
    params: {
      id: userId
    }
  });
}
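For comparison, here is a rough sketch of the same call using the built-in fetch(); the URL and the error handling are illustrative assumptions, not part of the Axios example above:

export async function getUserWithFetch(userId: string) {
  // Same query as above, but with the native fetch() API.
  const response = await fetch(
    `https://example.com/api/user?id=${encodeURIComponent(userId)}`
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}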
That's it. Happy coding!
Expanding the Scope
With the basics covered, let's discuss how to scale this solution. Not in terms of algorithmic or space complexity - which is often the focus of job interviews - but in terms of scaling it for a large application that will be worked on by a team of developers over several years. This is the kind of scaling we deal with in day-to-day work.
To illustrate this, we'll use a real-world API from a digital bank with public documentation: https://docs.solarisgroup.com/api-reference
Disclaimer: I am not affiliated with Solaris SE. I just happened to be familiar with this API. Using realistic examples can be beneficial.
Fetching Person Data
Let's begin with a basic getPerson call:
import axios from 'axios';

const httpClient = axios.create({
  baseURL: process.env.API_BASE_URL
});

export function getPerson_v0(personId: string) {
  return httpClient.get(`v1/persons/${personId}`);
}
As you can see, it's not significantly different from the code in the introductory section. However, considering the returned value (see the response object in the docs), you might spot some issues.
Firstly, axios returns its own response object. Passing this up the stack isn't ideal, as our callers (users of the API client we make) don't need to know what HTTP library we're using. We want to retain the flexibility to replace it later. Moreover, the users are primarily interested in the Person object itself, which is contained in the data property:
export function getPerson_v1(personId: string) {
  return httpClient
    .get(`v1/persons/${personId}`)
    .then((response) => response.data);
}
Now, let's consider the Person value itself:
- Keys are in snake_case, which can lead to warnings from ESLint and complaints from other developers because our standard is camelCase.
- Dates are represented as strings.
export async function getPerson_v2(personId: string) {
  return httpClient
    .get<Person>(`v1/persons/${personId}`)
    .then((response) => response.data)
    .then(snakeToCamelCase)
    .then(restoreDates);
}
In the above code, snakeToCamelCase and restoreDates are utility functions that iterate through the object's keys and map either the key or the value, depending on what needs to be corrected. The details of their implementation are not the focus of our discussion.
I've also added <Person> to the type parameters. Naturally, we need to create interfaces and enumerations for all data that we send or retrieve. These could be generated from an OpenAPI spec or created manually, depending on what the docs provide.
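Still, for illustration, here is a minimal sketch of what snakeToCamelCase and restoreDates could look like; the recursive traversal and the ISO-date detection are assumptions, not the actual implementation:

// Sketch: recursively convert snake_case keys to camelCase.
export function snakeToCamelCase<T>(value: T): T {
  if (Array.isArray(value)) {
    return value.map(snakeToCamelCase) as unknown as T;
  }
  if (value !== null && typeof value === "object" && !(value instanceof Date)) {
    return Object.fromEntries(
      Object.entries(value).map(([key, val]) => [
        key.replace(/_([a-z])/g, (_, letter) => letter.toUpperCase()),
        snakeToCamelCase(val),
      ])
    ) as unknown as T;
  }
  return value;
}

// Sketch: recursively turn ISO-8601 date strings into Date instances.
// A real client would likely whitelist specific fields instead.
export function restoreDates<T>(value: T): T {
  if (Array.isArray(value)) {
    return value.map(restoreDates) as unknown as T;
  }
  if (value !== null && typeof value === "object" && !(value instanceof Date)) {
    return Object.fromEntries(
      Object.entries(value).map(([key, val]) => [key, restoreDates(val)])
    ) as unknown as T;
  }
  if (typeof value === "string" && /^\d{4}-\d{2}-\d{2}(T[\d:.]+Z?)?$/.test(value)) {
    return new Date(value) as unknown as T;
  }
  return value;
}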
An Unexpected Value
We restore date values, but we don't need to restore enumerations like employment_status, as they are just strings. But what if a new, unexpected value appears in the response that we don't have in our EmploymentStatus? Or, even worse, some other field becomes a string instead of a number? We'll probably corrupt the application state at runtime.
An API is a living creature; it changes over time. It also has bugs (everything has bugs). We need to defend our code from invalid response data.
Schema Validation
Schema validation can ensure that the data your application is receiving is of the correct type and format. This can help you catch bugs and discrepancies in the API, as well as protect your application from potential crashes or corrupt states due to unexpected data.
For Node.js, libraries such as Joi, Yup, and Zod are often used. I will be using Runtypes.
import * as t from 'runtypes';

export const PersonSchema = t.Record({
  // Other fields are omitted for simplicity.
  // Keys are camelCase here because the schema is checked
  // after snakeToCamelCase has already run.
  id: t.String,
  firstName: t.String,
  lastName: t.String,
  email: t.String,
  birthDate: t.InstanceOf(Date),
  employmentStatus: EmploymentStatusSchema,
});

export type Person = t.Static<typeof PersonSchema>;
export async function getPerson_v3(personId: string) {
  return httpClient
    .get(`v1/persons/${personId}`)
    .then((response) => response.data)
    .then(snakeToCamelCase)
    .then(restoreDates)
    .then((person) => PersonSchema.check(person));
}
Calling .check(person) either returns a typed Person or fails with an error.
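The PersonSchema above also references EmploymentStatusSchema, which isn't shown. With Runtypes it can be expressed as a union of literals; the concrete values below are placeholders for illustration, so check the API docs for the real list:

export const EmploymentStatusSchema = t.Union(
  // The possible values are assumed here for illustration purposes.
  t.Literal("EMPLOYED"),
  t.Literal("UNEMPLOYED"),
  t.Literal("SELF_EMPLOYED"),
  t.Literal("RETIRED")
);

export type EmploymentStatus = t.Static<typeof EmploymentStatusSchema>;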
Authentication
We need to address authentication as well. The API is protected by a bearer token, which you can get via the OAuth endpoint. The token has an expiration time, and we will need to refresh it from time to time. I'm going to encapsulate all business logic about tokens in the TokenManager component. Read as "I will mostly skip it":
export class TokenManager {
  getToken(): Promise<string> {
    // ...
    return token;
  }

  close(): void {
    // ...
  }
}
It is really a separate client, which knows how to call the token endpoint and uses setTimeout to refresh the token prematurely. getToken returns a promise with the token value.
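To make the idea concrete, here is a rough sketch of what such a component might look like; the token endpoint path, the credential environment variables, and the response shape are assumptions, not part of the real API client:

import axios from "axios";

export class TokenManager {
  private token?: string;
  private refreshTimer?: NodeJS.Timeout;

  async getToken(): Promise<string> {
    if (!this.token) {
      await this.refresh();
    }
    return this.token as string;
  }

  close(): void {
    // Stop the refresh loop so the process can exit cleanly.
    if (this.refreshTimer) {
      clearTimeout(this.refreshTimer);
    }
  }

  private async refresh(): Promise<void> {
    // Assumed client-credentials flow and endpoint; adjust to the real API.
    const { data } = await axios.post(`${process.env.API_BASE_URL}/oauth/token`, {
      grant_type: "client_credentials",
      client_id: process.env.API_CLIENT_ID,
      client_secret: process.env.API_CLIENT_SECRET,
    });
    this.token = data.access_token;
    // Refresh slightly before the token actually expires.
    this.refreshTimer = setTimeout(
      () => this.refresh(),
      (data.expires_in - 60) * 1000
    );
  }
}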
Axios provides interceptors, which we will use to inject the token:
const tokenManager = new TokenManager();

async function injectNewToken<
  C extends InternalAxiosRequestConfig | AxiosRequestConfig
>(config: C): Promise<C> {
  const token = await tokenManager.getToken();
  if (typeof config.headers?.setAuthorization === "function") {
    config.headers.setAuthorization(`Bearer ${token}`);
    return config;
  }
  return {
    ...config,
    headers: {
      ...config.headers,
      Authorization: `Bearer ${token}`,
    },
  };
}

httpClient.interceptors.request.use(async (config) => {
  return injectNewToken(config);
});

httpClient.interceptors.response.use(undefined, async (error) => {
  if (isAxiosError(error) && error.response?.status === 401 && error.config) {
    return injectNewToken(error.config).then((c) => httpClient.request(c));
  }
  throw error;
});
The code is simplified, but what we're doing is:
- Before actually dispatching a request, we ask TokenManager to give us a valid token, then we add it as the Authorization header to the request config.
- In case of an error response with a 401 status code, we know from the documentation that the token had expired by the time it arrived at the API, and we need to get a new one and repeat the request.
Errors
The API can respond with more than just 401 errors. Even more importantly, there's a whole page describing the error response format. At a bare minimum, we need to parse the response and throw something like new Error(responseError.code). But we will need the error ID when we write to their support (and we will, a lot). The error description is helpful for debugging issues as well.
The users of our client library should not dig into Axios errors and responses to understand that the person is not found. They should be able to easily pattern match on different error situations.
Introducing custom errors and error mapping. This is where I will simplify even more; otherwise, it would be a thousand lines of code. For example, I will assume that there can be only one error in the response, and we will map only error codes, ignoring status codes and their possible combinations.
We begin by defining types and schemas for the server's error responses:
export enum SolarErrorCode {
  METHOD_NOT_ALLOWED = "method_not_allowed",
  MODEL_NOT_FOUND = "model_not_found",
  UNAUTHORIZED_ACTION = "unauthorized_action",
}

export const SolarErrorDataSchema = t.Record({
  // Other fields are omitted for simplicity
  id: t.String,
  status: t.Number,
  code: t.String,
  detail: t.String,
});

export type SolarErrorData = t.Static<typeof SolarErrorDataSchema>;

export const SolarErrorResponseSchema = t.Record({
  // Other fields are omitted for simplicity
  errors: t.Array(SolarErrorDataSchema),
});
After that we can define our own error class, which will be used by the code calling our client:
export enum ClientErrorCode {
  CONFIG_ISSUE = "config_issue",
  MODEL_NOT_FOUND = "model_not_found",
  NO_CONNECTION = "no_connection",
  UNAUTHORIZED = "unauthorized",
  UNEXPECTED_ERROR = "unexpected_error",
}

export class ClientError extends Error {
  static is(value: unknown): value is ClientError {
    return value instanceof ClientError;
  }

  static mapCode<R>(codeMap: { [key: string]: (error: ClientError) => R }) {
    return function clientErrorMapper(error: unknown) {
      if (ClientError.is(error)) {
        const remapper = codeMap[error.code] ?? codeMap["_"];
        if (typeof remapper === "function") {
          return remapper(error);
        }
      }
      throw error;
    };
  }

  id?: string;
  code: string;

  constructor(
    errorData: SolarErrorData | Pick<SolarErrorData, "detail" | "code">
  ) {
    super(errorData.detail);
    if ("id" in errorData) {
      this.id = errorData.id;
    }
    this.code = errorData.code;
  }
}
This is how to pattern match a ClientError:
const person = await getPerson("id").catch(
  ClientError.mapCode({
    [ClientErrorCode.CONFIG_ISSUE]: (e) => {
      throw new Error(`Another error: ${e.code}`);
    },
    [ClientErrorCode.UNEXPECTED_ERROR]: () => 123 as const,
  })
);
Here users would be able to remap client errors to their own internal errors. Also, if you look at the inferred type of person, it would be Person | undefined | 123.
But where does this ClientError come from? We parse error responses and map them into appropriate values (like undefined instead of 404) or ClientError instances, depending on what is required. The goal is to hide the transport- or API-specific stuff.
Here is the helper. Most of the conditions (if blocks) come from how you should handle Axios errors:
export function mapAxiosError<R>(errorMap: {
  [key: string]: (() => R) | ClientErrorCode;
}) {
  return function axiosErrorMapper(error: unknown): R {
    if (!isAxiosError(error)) {
      throw error;
    }
    if (error.response) {
      const data = SolarErrorResponseSchema.check(error.response.data);
      const firstError = data.errors[0];
      const valueMapper = errorMap[firstError.code] ?? errorMap["_"];
      if (typeof valueMapper === "function") {
        return valueMapper();
      }
      throw new ClientError({
        id: firstError.id,
        code:
          typeof valueMapper === "string"
            ? valueMapper
            : ClientErrorCode.UNEXPECTED_ERROR,
        detail: firstError.detail,
        status: firstError.status,
      });
    } else if (error.request) {
      throw new ClientError({
        code: ClientErrorCode.NO_CONNECTION,
        detail: "Connection failed, please check your network",
      });
    } else {
      throw new ClientError({
        code: ClientErrorCode.CONFIG_ISSUE,
        detail: "Request configuration is invalid",
      });
    }
  };
}
The helper again, but applied to getPerson:
export async function getPerson_v4(personId: string) {
  return httpClient
    .get(`v1/persons/${personId}`)
    .then((response) => response.data)
    .then(snakeToCamelCase)
    .then(restoreDates)
    .then((person) => PersonSchema.check(person))
    .catch(
      mapAxiosError({
        [SolarErrorCode.MODEL_NOT_FOUND]: () => undefined,
        [SolarErrorCode.UNAUTHORIZED_ACTION]: ClientErrorCode.UNAUTHORIZED,
        _: ClientErrorCode.UNEXPECTED_ERROR,
      })
    );
}
Now, half of the code in getPerson would be repeated in other API functions, so we need to extract that code:
export interface TypedRequestConfig<T> extends AxiosRequestConfig<T> {
  schema?: Runtype<T>;
}

export async function makeRequest<T>(
  config: TypedRequestConfig<T>
): Promise<T> {
  const response = await httpClient.request(config);
  const result = snakeToCamelCase(restoreDates(response.data));
  if (config.schema) {
    return config.schema.check(result);
  }
  return result;
}
Which removes a lot of duplication:
export async function getPerson_v5(personId: string) {
  return makeRequest({
    method: "GET",
    url: `v1/persons/${personId}`,
    schema: PersonSchema,
  }).catch(
    mapAxiosError({
      [SolarErrorCode.MODEL_NOT_FOUND]: () => undefined,
      [SolarErrorCode.UNAUTHORIZED_ACTION]: ClientErrorCode.UNAUTHORIZED,
      _: ClientErrorCode.UNEXPECTED_ERROR,
    })
  );
}
Wrapping up
I bet you are already tired of writing code. But there's still so much to do:
- A circuit breaker and retries. If the network fails, it is convenient to retry automatically. But you need to control this behavior and prevent infinite loops, especially with 401 responses and invalid tokens (see the sketch after this list).
- Logs for debugging. Bad things happen, and you will need to debug the app, probably even in a live environment. It is nice to have some way to enable logs showing requests and responses.
- Telemetry. You probably want to collect response times and other data and send them to Prometheus or another system.
- Request overrides. There should be a way to attach additional headers, set a specific timeout, etc., for every API function individually.
- Distributed tracing. This could be done through request overrides, but you probably want to set it up once during client initialization.
- Get rid of process.env. Use a reliable config component like envalid. Use lodash or another library instead of typeof.
- Who knows what else.
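As a starting point for the retries mentioned in the first item, a bounded retry on network failures could be wired into the same response interceptor chain. This is a sketch only: the retryCount field is a custom addition, not a built-in Axios option, and httpClient is the Axios instance from earlier:

import { isAxiosError, InternalAxiosRequestConfig } from "axios";

const MAX_RETRIES = 2;

httpClient.interceptors.response.use(undefined, async (error) => {
  // Retry only when the request never reached the server (no response at all).
  if (isAxiosError(error) && !error.response && error.config) {
    const config = error.config as InternalAxiosRequestConfig & {
      retryCount?: number;
    };
    const attempts = config.retryCount ?? 0;
    if (attempts < MAX_RETRIES) {
      config.retryCount = attempts + 1;
      return httpClient.request(config);
    }
  }
  throw error;
});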
What I want you to understand is that making a production-grade client is difficult. You can definitely call an API from Node.js as shown at the beginning of this post. But after a couple of years of work by multiple developers or even teams, the app and the client will become enormous. Unmanaged, accidental complexity will silently waste the time of every developer when they add a new function, update it, or fix a bug. And it will not be visible on Jira charts. So, at some point you will need to refactor, and you may use this post for inspiration.
Bonus Content
If you look at getPerson_v4, you might notice that response processing is like a data pipeline with multiple steps. A successful response has its own steps, while an error response has its own. These steps are simple.
Speaking of steps, if we were to create a function for something like POST (where we send data to the API), we would have one more pipeline. This pipeline would process the request body: converting camelCase to snake_case, and so on. It's similar to the response pipeline, but in the opposite direction.
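As an illustration, such a write path could reuse makeRequest and the error mapping from above. The camelToSnakeCase helper and the CreatePersonInput shape below are assumptions, not part of the actual client:

// Hypothetical inverse of snakeToCamelCase (assumption).
declare function camelToSnakeCase(value: unknown): any;

// Hypothetical input shape for creating a person (assumption).
interface CreatePersonInput {
  firstName: string;
  lastName: string;
  email: string;
}

export async function createPerson(person: CreatePersonInput) {
  return makeRequest({
    method: "POST",
    url: "v1/persons",
    // The request pipeline mirrors the response one, in the opposite direction.
    data: camelToSnakeCase(person),
    schema: PersonSchema,
  }).catch(
    mapAxiosError({
      _: ClientErrorCode.UNEXPECTED_ERROR,
    })
  );
}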
You've seen data pipelines before. In express, request processing is a data pipeline. Every middleware somehow affects or transforms the request until it reaches your handler (controller).
Many of the recommendations I made are applicable to frontend and universal clients too.
The full source code is available on CodeSandbox.