Several years ago, we performed a big rewrite of our back-end codebase (now better known as The Great Migration), from a JavaScript monolith into a central GraphQL API written in TypeScript, connected to a few microservices.
Since that time, we’ve continuously been upping our TypeScript game to help us guide development. One area that we’ve recently put effort into is configuring type generation for our API schemas.
Our main motivation for generating the types from our API schemas was removing ambiguity. Often in large or complex JavaScript codebases, unless you just wrote a piece of code yourself, you don’t know the exact content of a variable until inspecting it at runtime.
For a long time, however, JavaScript's ambiguity remained in our TypeScript codebase wherever data came in or went out through an API. The types for that data had to be manually crafted and maintained whenever the API schema was updated, which is an error-prone process.
In this blog post, we share our experience, lessons learned and desired improvements in generating TypeScript types for our Node.js GraphQL API server.
Type generation for GraphQL servers
There are two main approaches to keeping the types of the GraphQL schema and entities in business logic in sync. You can generate the schema based on your TypeScript code (e.g. TypeGraphQL), or you can generate types based on your schema (e.g. GraphQL Code Generator). We opted for the latter since it slotted right into our existing GraphQL server implementation using Apollo Server.
The problem
Apollo Server does not provide type generation for resolvers out of the box. Therefore, in our initial implementation we just manually wrote type definitions that reflected the types in the schema. The schema types are enriched with extra fields which we write resolvers for:
// src/customers/customer.ts
export interface ICustomer {
id: string;
email: string;
firstName: string;
lastName: string;
}
// src/customers/schema.ts
const customerTypeDef = /* GraphQL */ `
type Customer implements User {
id: ID!
firstName: String!
lastName: String!
houses: [House!]! # <- A resolved field
}
`;
// src/customers/resolvers.ts
import { IObjectTypeResolver } from '@graphql-tools/utils';
import { ICustomer } from './customer';
type BaseResolverObject<T> = IObjectTypeResolver<T, ResolverContext, any>;
const customerResolvers: BaseResolverObject<ICustomer> = {
async houses(customer, _, { context }): Promise<House[]> {
return CustomerService.findHouses(context, customer.id);
},
};
For simple resolvers that perform a look-up (like houses), this is not a problem. But with code that evolves fast, and types that are nearly identical to each other, mismatches between the schema and the TypeScript definitions will occasionally occur:
// src/customers/schema.ts
const mutationTypeDefs = /* GraphQL */ `
input CustomerRegistrationInput {
firstName: String!
lastName: String!
email: String!
phone: String
password: String!
}
extend type Mutation {
registerCustomer(customer: CustomerRegistrationInput!): Customer!
}
`;
// src/customers/resolvers.ts
const mutations: BaseResolverObject<never> = {
async registerCustomer(
_,
{ customer }: { customer: ICustomer & { password: string } },
{ context },
): Promise<Customer> {
return CustomerService.register(context, customer);
},
};
In cases like this one, it wasn't uncommon for us to reuse existing type definitions (ICustomer) for the schema's input types (CustomerRegistrationInput). These usually overlap enough to work with, but more often than not they contain fields that don't exist on the actual input type, or differ in terms of nullability.
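To make that drift concrete, here is a contrived sketch (hand-written for the illustration, not taken from our codebase or the generated output) of how the reused entity type and the schema's input type diverge:
// illustrative only — not a real file in our codebase
import { ICustomer } from './customer';
// CustomerRegistrationInput from the schema, written out as a TypeScript type:
type CustomerRegistrationInput = {
  firstName: string;
  lastName: string;
  email: string;
  phone?: string | null; // nullable in the schema
  password: string;
};
// The reused type compiles fine, but claims an `id` field the client never sends
// and hides the optional `phone` field, so neither the compiler nor a reader can
// trust the argument's real shape.
type AssumedRegistrationArgument = ICustomer & { password: string };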
The solution
GraphQL Code Generator can eliminate this ambiguity between the GraphQL Schema and TypeScript by generating the required TypeScript types from the GraphQL schema. With a relatively simple configuration file, the resolvers in the example above can be transformed into the following:
// src/customers/resolvers.ts with type generation
import { MutationResolvers } from '../generated/graphql';
const mutations: MutationResolvers = {
async registerCustomer(_, { customer }, { context }): Promise<Customer> {
return CustomerService.register(context, customer);
},
};
The configuration file allows setting up a mapping between the type definitions of our own entities and the types generated through the schema. This allows us to receive and return our own types in the resolvers, keeping our business logic and API layer cleanly separated and reducing code duplication.
# codegen.yml
# Configuration documentation:
# https://www.graphql-code-generator.com/docs/plugins/typescript
# https://www.graphql-code-generator.com/docs/plugins/typescript-resolvers
schema: ${SCHEMA_PATH:http://localhost:4000/graphql}
documents: null
generates:
  ./src/generated/graphql.ts:
    plugins:
      - 'typescript'
      - 'typescript-resolvers'
      - add:
          content:
            - import { FileUpload } from 'graphql-upload';
    config:
      # Needed for apollo-server compatibility
      useIndexSignature: true
      contextType: ../BaseResolver#ResolverContext
      # Needed for our mixed use of null and undefined corresponding to nullable values in GraphQL
      maybeValue: T | null | undefined
      # Linking enums inserted into the schema back to their definition in TypeScript, so they don't get re-generated
      enumValues:
        UserType: '../users/User#UserType'
      # Custom mapping to model types
      # https://www.graphql-code-generator.com/docs/plugins/typescript-resolvers#use-your-model-types-mappers
      mapperTypeSuffix: Model
      showUnusedMappers: true
      mappers:
        Customer: ../users/Customer#Customer
Once this configuration has been set up, the types can be re-generated any time the schema is updated, and the TypeScript compiler will tell you which resolvers need to be updated. By integrating this into the CI/CD pipeline as well, the stability of the resolvers is greatly improved.
The configuration of GraphQL Code Generator is very flexible and is continuously being improved. For example, since type names in the schema and in the code usually overlap, the mapperTypeSuffix option was introduced to automatically rename the mapped types in the generated code by appending a suffix such as "Model". Similarly for enums: since two structurally identical enum definitions are not assignable to each other in TypeScript, the configuration allows you to map the enums generated from the schema to those defined in your own code.
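The enum pitfall is easy to reproduce in isolation. A minimal sketch (the names are invented for the example):
// Two structurally identical enums are still distinct, non-assignable types:
enum UserType {
  Customer = 'CUSTOMER',
}
enum GeneratedUserType {
  Customer = 'CUSTOMER',
}
declare let ours: UserType;
declare const generated: GeneratedUserType;
// ours = generated;
// ^ Error: Type 'GeneratedUserType' is not assignable to type 'UserType'.
// The enumValues mapping sidesteps this by making the generated code import our enum instead.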
Benefits
Overall, it's pretty great. Compared to our previous way of working, it reduces risk and saves us time by eliminating the error-prone process of writing and maintaining types manually. It also reduces code duplication and prevents mismatches between the schema and the TypeScript types that represent it.
An unexpected benefit in our case is that it also forces the schema to be properly defined. Some schema types that were set up at the very beginning of our use of GraphQL had all of their fields defined as nullable, and in rare cases fields we expected to be there were missing entirely.
Issues we ran into
While the type generation setup we ended up with is really useful overall, it does leave some things to be desired:
The Maybe type
A major gripe we have stems from GraphQL making no distinction between null and undefined: fields and arguments can simply be nullable, which appears as Maybe<T> in the generated types. While null and undefined are similar, they are not interchangeable in JavaScript or TypeScript: undefined indicates the absence of a value, while null indicates that a value exists, but is explicitly null.
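A minimal sketch of the difference, using the maybeValue setting from our configuration above (the input type is invented for the example):
type Maybe<T> = T | null | undefined; // matches maybeValue: T | null | undefined
interface UpdateCustomerArgs {
  phone?: Maybe<string>;
}
const untouched: UpdateCustomerArgs = {};            // phone was not provided at all
const cleared: UpdateCustomerArgs = { phone: null }; // phone was provided, explicitly empty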
Although it is possible to define the Maybe type as just null or just undefined, both are needed in our current way of working. We expect nullable values on input types in the schema to be optional in TypeScript, meaning possibly undefined. However, we sometimes define an entity's fields as possibly null in TypeScript, which causes trouble when returning it from a resolver. Our use of null comes from the decision to align with our database client (MongoDB), where null is used to un-set the value of a field: a Partial type of an entity can be passed to an update method, in which undefined values are left as-is and only fields explicitly set to null are un-set (a sketch of such an update method follows the entity example below).
interface IProduct {
id: string;
created: Date;
/** Can be un-archived by setting this value to null */
archived: Date | null;
}
...
async unArchiveProduct(id: string) {
this.productRepository.updateOne(id, { archived: null });
}
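As promised, a simplified sketch of what such an update method can look like on top of the MongoDB driver. Our actual repository code differs, so treat the names and signature as assumptions:
import { Collection, ObjectId } from 'mongodb';
// Translate a Partial<T> into a MongoDB update: undefined fields are left untouched,
// fields explicitly set to null are $unset, everything else is $set.
async function updateOne<T>(
  collection: Collection,
  id: string,
  changes: Partial<T>,
): Promise<void> {
  const $set: Record<string, unknown> = {};
  const $unset: Record<string, ''> = {};
  for (const [key, value] of Object.entries(changes)) {
    if (value === undefined) continue;    // leave the field as-is
    if (value === null) $unset[key] = ''; // explicitly un-set the field
    else $set[key] = value;
  }
  await collection.updateOne(
    { _id: new ObjectId(id) },
    {
      ...(Object.keys($set).length > 0 ? { $set } : {}),
      ...(Object.keys($unset).length > 0 ? { $unset } : {}),
    },
  );
}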
For the moment, our Maybe type remains both null and undefined. For input types, we usually strip off all null types using a DeepNonNullable type-cast (defined below, with a usage sketch after it), since null is not present on the vast majority of our entities. Alternatively, in our situation null could be omitted from the Maybe type by handling the requirement of un-setting database fields in another way, such as setting up separate mutations for that purpose and moving the responsibility to the database layer.
type _DeepNonNullableArray<T> = Array<DeepNonNullable<NonNullable<T>>>;
type _DeepNonNullableObject<T> = {
[P in keyof T]-?: DeepNonNullable<NonNullable<T[P]>>;
};
export type DeepNonNullable<T> = T extends any[]
? _DeepNonNullableArray<T[number]>
: T extends Record<string, unknown>
? _DeepNonNullableObject<T>
: NonNullable<T>;
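A sketch of how the cast is applied in a resolver. The import paths, and the CustomerService call carried over from the earlier snippets, are assumptions rather than verbatim code:
// src/customers/resolvers.ts (illustrative)
import { CustomerRegistrationInput, MutationResolvers } from '../generated/graphql';
import { DeepNonNullable } from '../util/DeepNonNullable'; // wherever the helper above lives
import { CustomerService } from './CustomerService';
const mutations: MutationResolvers = {
  async registerCustomer(_, { customer }, { context }) {
    // The generated input allows null wherever the schema is nullable; our own
    // entities only use undefined, so we assert the nulls away before passing it on.
    const input = customer as DeepNonNullable<CustomerRegistrationInput>;
    return CustomerService.register(context, input);
  },
};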
Keeping generated types separate from business logic
Soon after introducing the types generated from our GraphQL schema, they started leaking into our business logic. Sometimes by accident, due to not paying attention to the editor's automatic import functionality, and sometimes intentionally, to avoid having to write TypeScript definitions ourselves for trivial cases in our business logic.
After a while, however, cryptic build errors started appearing. When a type that is mapped to in the generated file itself depends on a generated type, it creates an import cycle. These are notoriously hard to solve and can appear out of nowhere after re-organizing imports in files that seem unrelated to the cycle. Therefore, we chose to only use the generated types in our resolvers: our business logic should remain free of them. To enforce this decision, a lint rule was set up using the import/no-restricted-paths rule from the eslint-plugin-import plugin:
"import/no-restricted-paths": [
"error",
{
"zones": [
{
"target": "src/**/!(resolvers).ts",
"from": "src/**/generated/graphql.ts",
"message": "Generated GraphQL types may only be used in resolvers"
}
]
}
]
Along with our rule not to define business logic in resolvers, this somewhat reduces the potential of the generated types. Still, it helps to separate our API from the other layers in our codebase.
Usually the generated type and its counterpart in the business logic are almost identical, so we can perform a type cast, and TypeScript will warn us if there is a mismatch. Writing adapters to perform this task was considered, but we decided against it to avoid introducing unneeded overhead. For now, input types are usually converted into our own types directly in the resolvers.
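A toy example of what we mean by TypeScript warning us (the types are invented for the illustration):
interface GeneratedCustomerInput {
  firstName: string;
  lastName: string;
  email: string;
}
interface ICustomerDraft {
  firstName: string;
  lastName: string;
  email: string;
}
declare const input: GeneratedCustomerInput;
const draft = input as ICustomerDraft; // fine: the shapes are identical
interface IDriftedDraft {
  firstName: string;
  lastName: string;
  emailAddress: string; // renamed field, no longer matches the generated type
}
// const drifted = input as IDriftedDraft;
// ^ Error: neither type sufficiently overlaps with the other.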
Mapped paths are not validated
A small nitpick: when updating the mapping of entities or enums in the codegen configuration, no warning or error is given if a path or type name contains a mistake. The need to update the configuration is infrequent, so it's only a small annoyance. For now, we rely on the TypeScript compiler to tell us about it in the build process.
Codegen configuration examples
While the documentation of GraphQL Code Generator is decent, there aren't many complex configuration examples to be found. We posted ours earlier in this article; we based it on threads like this one. We'd be interested to see more examples, best practices and tips 'n tricks.
Wrap up
Type generation has become one of the techniques we rely on most these days, because:
- It saves time in maintaining two type definitions that should be identical
- It prevents the error-prone process of maintaining mappings of one type definition to another
- It reduces easily missed runtime errors by eliminating ambiguity
- It helps to avoid introducing breaking schema changes, as it enforces correct use of the schema
- Writing types yourself is for mugs
Do you like GraphQL? Are you looking for work? Reach out! https://energiebespaarders.recruitee.com/