
Making Cloudflare D1 usable like a type-safe O/R mapper

Cloudflare D1 (hereafter: D1) was released as an alpha version last year, but being only an alpha it does not quite scratch the itch yet. It is not just missing DB functionality such as transactions; writing code against it is also a bit cumbersome. (The main reason being that you end up writing raw SQL yourself.)

The main topic of this article is how to deal with the way you write code against D1, especially the SQL and the types.

Prerequisites
https://github.com/chimame/workers-remix-d1

  • The complete program used in this article can be found here.
  • In addition to D1, Cloudflare Pages is also used.
  • The application deployed to Cloudflare is built with Remix.
  • I will describe D1 and the programs around it, but I will not describe everything in detail.

Summary

  • Combining kysely, kysely-codegen, and kysely-d1 lets you access D1 through a SQL builder.
  • It is even easier to use than the standard DB.prepare(<SQL>).all(), because the result types are filled in automatically.

What is Cloudflare D1?

https://www.cloudflare.com/press-releases/2022/cloudflare-announces-d1-first-integrated-database/

Unlike other databases on the market, Cloudflare D1 will use Cloudflare's global network to optimize a businesses' database by locating it as close as possible to their customers, providing the fastest possible experience to users.

In layman's terms, you can build your DB on the edge (a CDN and the like) and access it quickly from applications that also run on the edge. Edge computing has the advantage of being close to the client (the access source), but the program running on the edge still ends up reaching back to a DB sitting behind it, and that DB access erodes the advantage of being close to the client. So if the program runs on the edge, it follows that the DB should also be on the edge. D1 is the solution to this problem.

Access program to D1

Standard version

The following is a program to access D1.

const data = await env.DB.prepare('SELECT * FROM User').all()
console.log('data.results', data.results)
// #=> [
//   { id: <number>, email: <string>, name: <string | null> },
//   { id: <number>, email: <string>, name: <string | null> }
// ]
// ...but the type will be unknown[] | unknown

What do you think? What I want to point out is this:

  • You need to write SQL.
  • You need to specify the type via generics.

The "need to write SQL" is a bit troublesome when using O/R mappers such as Prisma or TypeORM, for example. But even if you allow it, the trouble with writing code in TypeScript is that the return value (data.results) does not have a type. In other words, the type cannot be assigned without interpreting the SQL to be executed by prepare. So, in the above code, you need to define the type of the return value yourself with generics in the all function.

const data = await env.DB.prepare('SELECT * FROM User').all<{
  id: number,
  email: string,
  name: string | null
}>()

It is currently an alpha version, so this may well improve, but I still don't want to write both the SQL and the type by hand.

Kysely version

There is a TypeScript SQL builder called Kysely.

https://koskimas.github.io/kysely/

Kysely supports PostgreSQL, MySQL, and SQLite out of the box. However, when used on its own, Kysely still requires you to specify the schema as a generic type parameter.

As the sketch below shows, you define the list of tables and their columns yourself. I would also like to somehow avoid writing this generic specification by hand.
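To make that concrete, here is a minimal sketch of what such a hand-written schema type might look like, assuming the same User table (id, email, name) used in the examples above:

import type { Generated } from 'kysely'

// Column types for the User table, written by hand
interface UserTable {
  id: Generated<number> // auto-generated primary key
  email: string
  name: string | null
}

// The full database schema: one property per table,
// passed to Kysely as `new Kysely<Database>(...)`
interface Database {
  User: UserTable
}

Every query built from a Kysely<Database> instance is fully typed, but this interface has to be kept in sync with the actual schema by hand.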

Automatic generation of table and column types

This is where kysely-codegen comes in.

https://github.com/RobinBlomberg/kysely-codegen

As the name suggests, kysely-codegen generates Kysely type definitions from an existing database. For example, when developing locally with D1, the SQLite database file is located at

<project root>/.wrangler/state/d1/<database Binding name>.sqlite3

So if you run kysely-codegen against it like this,

$ DATABASE_URL=./.wrangler/state/d1/DB.sqlite3 npx kysely-codegen

it generates the type definitions for you, which makes it very easy to use Kysely with them.
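For reference, the generated file (the DB type imported from 'kysely-codegen' below) might look roughly like this. This is a simplified sketch based on the User table from earlier; the exact output depends on your schema and the kysely-codegen version:

// Simplified sketch of kysely-codegen output
import type { Generated } from 'kysely'

export interface User {
  id: Generated<number>
  email: string
  name: string | null
}

// One property per table; this is the type you pass to Kysely<DB>
export interface DB {
  User: User
}

With that generated DB type in hand, a local Kysely setup (using better-sqlite3 against the same SQLite file) looks like this: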

import type { DB } from 'kysely-codegen'
import { Kysely, SqliteDialect } from 'kysely'
import Database from 'better-sqlite3'

const db = new Kysely<DB>({
  dialect: new SqliteDialect({
    database: new Database('./.wrangler/state/d1/DB.sqlite3')
  })
})

Using Kysely with D1

Now that the table and column types are generated automatically, we need a way to make Kysely talk to D1. This is where kysely-d1 comes in.

https://github.com/aidenwallis/kysely-d1

This is an adapter that allows Kysely to handle D1. By the way, there are several adapters for Kysely.

https://github.com/serverless-stack/kysely-data-api

https://github.com/depot/kysely-planetscale

kysely-d1 lets Kysely talk to D1, and with it the earlier code looks like this:

// Before: the standard version with a hand-written generic type
// const data = await env.DB.prepare('SELECT * FROM User').all<{
//   id: number,
//   email: string,
//   name: string | null
// }>()

import type { DB } from 'kysely-codegen'
import { Kysely } from 'kysely'
import { D1Dialect } from 'kysely-d1'

const client = new Kysely<DB>({ dialect: new D1Dialect({ database: env.DB }) })
const users = await client.selectFrom('User').selectAll().execute()

console.log('users', users)
// #=> [
//   { id: <number>, email: <string>, name: <string | null> },
//   { id: <number>, email: <string>, name: <string | null> }
// ]

What do you think? Doesn't this look a lot more like an O/R mapper? The types are also inferred automatically, complete with editor autocompletion, so I think it is much easier to use now.
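Other queries stay fully type-checked as well. Here is a minimal sketch reusing the client and User table from above (the e-mail addresses and the name are made up for illustration):

// A filtered select: the result type is inferred as
// { id: number, name: string | null } | undefined
const user = await client
  .selectFrom('User')
  .select(['id', 'name'])
  .where('email', '=', 'alice@example.com')
  .executeTakeFirst()

// An insert: column names and value types are checked at compile time
await client
  .insertInto('User')
  .values({ email: 'bob@example.com', name: 'Bob' })
  .execute()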

Conclusion

How was it? Whether you use Kysely or Prisma, my feeling is that you end up writing something SQL-like either way, so it is much easier when the query results are typed for you automatically.

Of course, D1 is still in alpha, so there is a good chance these points will be fixed. If you want to play with D1, please refer to the repository linked at the beginning of this article.

By the way, I have also made a repository that uses sqldef as a migration tool to keep migrations as easy as possible, so please take a look at that as well.
