Stefan 🚀

Posted on • Originally published at wundergraph.com

Embedding SQL into GraphQL without sacrificing type safety

How do you achieve type safety when the GraphQL response is dynamic,
i.e. when the response shape depends on the query arguments?
And why would anybody in their right mind want to do that in the first place?

This is a small deep dive into the world of GraphQL AST transformations,
and how to build a GraphQL Engine that allows users to embed SQL into GraphQL without sacrificing type safety.
Even if you don't care much about GraphQL, or think that embedding SQL into GraphQL is a bad idea,
you might still find this article interesting if you're into reverse engineering or AST transformations.

The final solution to our problem looks like this:

# .wundergraph/operations/me.graphql
query ($email: String!) @fromClaim(name: EMAIL) {
    row: queryRaw(query: "select id,email,name from User where email = ? limit 1", parameters: [$email]) {
        id: Int
        email: String
        name: String
    }
}

This creates a JSON-RPC API, accessible at /operations/users/me.
If a user wants to execute it, they need to be authenticated.
We inject their email address into the query via a JWT claim,
execute the raw SQL query and return the result in a type-safe way.

We're also generating a type-safe client and models that look like this:

export interface RawsqlQueryRowResponseData {
    row: {
        id: number;
        email: string;
        name: string;
    }[];
}

The Problem: Dynamic GraphQL Responses

The well-known way of looking at GraphQL is that someone implements a GraphQL Server,
deploys it somewhere and then clients can send GraphQL Operations to it.
This Server has a pre-defined schema that defines exactly what the client can query.

However, there's also another way of looking at GraphQL.
You can also think of GraphQL as an ORM for APIs in general.
We've got ORMs for databases, so why not for APIs?

At WunderGraph, we've developed a new mental model for APIs.
We think of APIs as Dependencies, just like npm packages,
with GraphQL as the language to interact with our "API Dependencies".
For us, GraphQL is to APIs what an ORM is to a database.

Now let's have a look at the problem we're trying to solve.
WunderGraph uses Prisma internally to generate a GraphQL API for your database.
At first glance, this doesn't seem novel, as there are already a lot of tools that do this, but that's not the point.

If we can translate a number of API Dependencies (Databases, OpenAPI, GraphQL, etc.) into a single unified GraphQL API,
we can then use GraphQL as the ORM for our API Dependencies.

But there's a catch. GraphQL is not really designed to interact with Databases.
There's no way to make the complexity of SQL accessible through an API layer using GraphQL.
If we're simply querying a table and joining it with some other tables,
a GraphQL "ORM" works fine, but if we want to do something more complex,
like aggregations or raw SQL queries, we're out of luck. Or are we?

Well, the title suggests that there is a way, so let's see how it works.

The Solution: Embedding SQL into GraphQL

I've heard you like SQL and GraphQL, so I've put some SQL into your GraphQL.

Let's get back to the example from the beginning.
We want to query a user by their email address.

Here's how you'd usually do this with a regular GraphQL API:

  1. Add an authentication middleware to your GraphQL Server that validates the cookie or JWT token
  2. Extract the email address from the JWT token and inject it into the context object of the GraphQL Request
  3. Extend the GraphQL Schema with a me root field on the Query Type that returns a User object
  4. Implement the me field resolver to query the database for the user with the email address from the context object
  5. Create a GraphQL Operation in the client that queries the me field
  6. Run your favorite GraphQL Code Generator to generate a type-safe client and models
  7. Use the type-safe client to call the previously generated Operation

That's a lot of work compared to writing 7 lines of GraphQL with an embedded SQL Statement.
So, let's see how this works in practice.

Reverse Engineering Prisma to send raw SQL statements

If you download and run the Prisma Query Engine,
you'll see that it allows us to enable a feature called raw queries.
By appending the "--enable-raw-queries" flag,
we're able to send raw SQL statements to the Prisma Query Engine over GraphQL.

If we enable the raw queries feature, Prisma will also extend the generated GraphQL Schema with a few new Root Fields.
Here's the gist of it:

type Mutation {
    executeRaw(query: String!, parameters: JSON): JSON
    queryRaw(query: String!, parameters: JSON): JSON
}

As you can see, Prisma adds two new root fields to the Mutation Type,
one for executing raw SQL statements and one for running raw SQL queries.

There are a few problems with this.
The queryRaw field should actually be on the Query Type, not the Mutation Type.
Both executeRaw and queryRaw take a JSON scalar as the parameters argument,
when in fact, the parameters are just a list of strings.
And last but not least, the return type of both fields is JSON,
which is not very useful if you want to use the result in a type-safe way.
The correct return type for executeRaw is actually Int,
because it returns the number of affected rows.
The correct return type for queryRaw is actually a list of rows,
where each row is a JSON object.

Alright, so we've got a few problems with the Prisma Query Engine.
I'm not blaming Prisma here;
they've told me multiple times that their GraphQL API is an implementation detail, only used internally by their client (and, of course, by me).

Teaching the Prisma Query Engine some real GraphQL

So, how can we make the Prisma Query Engine do what we want?
First, let's add some AST transformations to fix the generated GraphQL Schema.

Here's the updated Schema with the changes we've made:

type Query {
    queryRaw(query: String!, parameters: [String]): [_Row!]!
    queryRawJSON(query: String!, parameters: [String]): JSON
}
type Mutation {
    executeRaw(query: String!, parameters: [String]): Int!
}
type _Row {
    ID: ID!
    Int: Int!
    Float: Float!
    String: String!
    Boolean: Boolean!
    DateTime: DateTime!
    JSON: JSON!
    Object: _Row!
    Array: [_Row!]!
    OptionalID: ID
    OptionalInt: Int
    OptionalFloat: Float
    OptionalString: String
    OptionalBoolean: Boolean
    OptionalDateTime: DateTime
    OptionalJSON: JSON
    OptionalObject: _Row
    OptionalArray: [_Row!]
}

Modifying a GraphQL Schema is actually a fun task.
The GraphQL tooling is great and makes these modifications easy.
In a nutshell, you need to parse the text of the GraphQL Schema into an AST,
and then use a visitor to traverse the AST and modify it.

In our case, we're adding a new Object Type called _Row,
which is used to represent a row in the result set of a raw SQL query.
We then "move" the queryRaw field from the Mutation Type to the Query Type,
and we change the return type of queryRaw to a list of _Row objects.
We also add a new field called queryRawJSON to the Query Type,
which returns the raw JSON result of the raw SQL query.
This field might be useful if you want to use the result of the raw SQL query in a non-GraphQL context.
We've also changed the parameters argument of both queryRaw and executeRaw to a list of strings.
You can find the full implementation here.
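
To give you an idea of what such a transformation looks like, here's a minimal TypeScript sketch using the graphql npm package. It only shows the "move queryRaw from Mutation to Query" step and is purely illustrative; the actual WunderGraph implementation is written in Go on top of graphql-go-tools.

import { parse, print, visit } from "graphql";

// A minimal sketch: move the queryRaw field from the Mutation type to the Query type
// in a schema SDL string. The field and type names match the Prisma-generated schema,
// everything else is hypothetical.
export function moveQueryRawToQuery(schemaSDL: string): string {
    const ast = parse(schemaSDL);
    let queryRawField: any;

    // First pass: remember the queryRaw field definition and remove it from Mutation.
    const withoutQueryRaw = visit(ast, {
        ObjectTypeDefinition(node) {
            if (node.name.value !== "Mutation") return;
            queryRawField = node.fields?.find((f) => f.name.value === "queryRaw");
            return {
                ...node,
                fields: node.fields?.filter((f) => f.name.value !== "queryRaw"),
            };
        },
    });

    // Second pass: append the remembered field definition to the Query type.
    const rewritten = visit(withoutQueryRaw, {
        ObjectTypeDefinition(node) {
            if (node.name.value !== "Query" || !queryRawField) return;
            return { ...node, fields: [...(node.fields ?? []), queryRawField] };
        },
    });

    return print(rewritten);
}

The remaining changes, like injecting the _Row and queryRawJSON definitions and rewriting the parameters argument to [String], are done in the same fashion.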

How the _Row Type works

You might be wondering how the _Row type works.
Why did we add fields for all the scalar types and why is each field named after the scalar type?
Let me explain the concept behind the _Row type and how it solves our problem of achieving type-safety with raw SQL queries embedded into GraphQL.

Here's a simplified version of our above example to illustrate the concept:

# .wundergraph/operations/me.graphql
{
    row: queryRaw(query: "select id from User where email = 'jens@wundergraph.com'") {
        id: Int
    }
}

With the original GraphQL Schema generated by Prisma,
we wouldn't be able to generate a type-safe client for this Query,
because all we know about the response is that it's a JSON object.
However, the user who wrote the SQL statement knows that the result set contains a single row with a single column called id.
We also know that the id column is of type Int.

So, if we could tell our GraphQL Engine that the result will be an array of objects with a single field called id of type Int,
we could do all the type-checking and code generation for the user.
What the special _Row type does is exactly that.
If the user writes select id, they can use the id alias with the Int field in the Selection Set to define the response mapping.
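
For the simplified Operation above, the generated model would therefore look roughly like this (hypothetical interface name, following the same pattern as the RawsqlQueryRowResponseData interface shown earlier):

export interface MeResponseData {
    row: {
        id: number;
    }[];
}

The id alias becomes the key in the response object, and the Int field of _Row determines its TypeScript type.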

Changing the Schema of a GraphQL Server will break your client

So far, we've focused on fixing the GraphQL Schema generated by Prisma.
However, by doing so, we've also introduced a few problems.

The GraphQL Operations we'd like to send to our "modified" GraphQL Schema will be incompatible with the Prisma Query Engine.
If we send them as-is to Prisma, we'll get validation errors and the request will fail.
So, we need to find a way to "rewrite" the GraphQL Operations before sending them to Prisma.

Rewriting GraphQL Operations at the Gateway level

Let's see what problems we need to solve to make this work.
To make it easier to understand, let's introduce two terms: the downstream Operation and the upstream Operation.
The downstream Operation is the GraphQL Operation that uses the modified GraphQL Schema.
The upstream Operation is the one compatible with the Prisma Query Engine.

Transforming ASTs is one of the super-powers of WunderGraph.
Although it's a complex task, you can see from the Pull Request that the required changes are not that big.
What makes WunderGraph so powerful for solving a problem like this is that we split the problem into two phases:
planning and execution.

Most GraphQL Server implementations are not designed for this kind of problem.
They don't have a concept of "planning" and "execution".
All you can do is implement a resolve function for each field in the GraphQL Schema.
However, in our case we cannot know ahead of time which fields and types exist in the GraphQL Schema.
We need to be able to dynamically understand the downstream and upstream Schema,
and then make modifications to the Operation before sending it to the upstream GraphQL Server, in our case the Prisma Query Engine.

If that wasn't enough, we also need to be able to do this in a very efficient way.
As you'll see in the Pull Request
if you're interested in digging into the details,
there are no resolvers in the DataSource implementation,
which by the way is written in Go.

GraphQL AST Transformations at the Gateway level

The solution to the problem is to parse the downstream Schema and Operation as well as the upstream Schema into three ASTs.
Then, we use a visitor to traverse the downstream Operation AST and generate a new upstream Operation AST,
which we then print back to a string and send to the upstream GraphQL Server.

If that sounds complicated, it was actually a simplification.
The Prisma Query Engine is not a fully compliant GraphQL Server, so we need to make some further modifications for all of this to work.

How to make any GraphQL Query compatible with the Prisma Query Engine

Let's list the problems we need to solve:

  • remove all selections from the _Row type
  • inline all variable definitions into values (Prisma doesn't support variables)
  • render parameters as a JSON string value
  • escape all string values used in the parameters argument
  • rewrite the queryRawJSON field to queryRaw
  • rewrite Operations with the queryRaw field to render as mutation instead of query

I will not go into full detail on how we solve all the problems as it would take too long.
However, I'd like to highlight a few interesting parts of the implementation.
If you'd like to look at the full solution, you can find it here.

Removing all selections from the _Row Type

How do we remove all selections from the _Row type and its nested fields?

# .wundergraph/operations/me.graphql
{
    row: queryRaw(query: "select id from User where email = 'jens@wundergraph.com'") {
        id: Int
    }
}

Whenever we "enter" a field, we check if it's a queryRaw field.
If it is, we take a note that we are now "inside" a queryRaw field.
We can trust the depth first traversal of the AST to visit all subsequent fields in a certain order.
So, while we are "inside" a queryRaw field, we can simply ignore all fields.
Once we leave the queryRaw field, we can reset the state and continue as usual.

This way, we will copy all fields and selections from the downstream Operation AST to the upstream Operation AST,
except for the fields inside the queryRaw field.
The main part of this implementation can be found here.
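
For illustration, here's a hedged TypeScript sketch of the same idea using graphql-js. Instead of tracking enter/leave state explicitly, it replaces every queryRaw field with a copy that has no Selection Set, which achieves the same result; the actual implementation is the Go visitor linked above.

import { parse, print, visit } from "graphql";

// A minimal sketch: drop the Selection Set of every queryRaw field so the
// upstream Operation matches Prisma's JSON return type.
export function stripRowSelections(downstreamOperation: string): string {
    const ast = parse(downstreamOperation);
    const rewritten = visit(ast, {
        Field(node) {
            if (node.name.value !== "queryRaw") return;
            // Everything below queryRaw only describes the response mapping,
            // so it must not be sent upstream.
            return { ...node, selectionSet: undefined };
        },
    });
    return print(rewritten);
}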

Rewriting GraphQL Operations from Query to Mutation

This one is actually a one-liner.
If we find a queryRaw field, we simply change the Operation Type from Query to Mutation.
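
In a graphql-js based sketch, that one-liner could look like this (the real implementation only does this when a queryRaw field was actually found):

import { visit, DocumentNode, OperationTypeNode } from "graphql";

// Flip the Operation Type from query to mutation.
export function toMutation(operationAST: DocumentNode): DocumentNode {
    return visit(operationAST, {
        OperationDefinition(node) {
            return { ...node, operation: OperationTypeNode.MUTATION };
        },
    });
}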

Inlining all variable definitions into values

This was one of the biggest challenges I ran into when implementing a wrapper around the Prisma Query Engine.
The Prisma Query Engine doesn't support variables,
so you have to recursively traverse all fields,
find arguments with variable values and replace them with the actual value.

Well, actually it's not that simple.
You cannot replace the variable with a value directly.
You have to build a template that allows you to "render" the value of the variable into the query string at runtime.
This adds another layer of complexity, because you need to make sure that you properly render and escape the value.
For example, if you need to render an object from the variables JSON into the query string,
you actually have to render it as a GraphQL input object, which is slightly different from a JSON object.
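
Here's a hypothetical TypeScript helper to illustrate the difference: object keys lose their quotes when rendered as a GraphQL input object, while strings, numbers and booleans can reuse JSON's literal syntax.

// A simplified sketch: render a JSON value as a GraphQL input value literal.
// Real-world code also has to handle enums, custom scalars, dates, etc.
export function renderGraphQLValue(value: unknown): string {
    if (Array.isArray(value)) {
        return `[${value.map(renderGraphQLValue).join(", ")}]`;
    }
    if (value !== null && typeof value === "object") {
        const fields = Object.entries(value as Record<string, unknown>)
            .map(([key, val]) => `${key}: ${renderGraphQLValue(val)}`) // keys are NOT quoted
            .join(", ");
        return `{${fields}}`;
    }
    // strings, numbers, booleans and null share JSON's literal syntax
    return JSON.stringify(value);
}

// renderGraphQLValue({ name: "Jens", active: true })
// -> {name: "Jens", active: true}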

But that's not all.
In the case of the parameters argument, the value coming from the variables is an array of strings,
but it needs to be rendered as a JSON string.
The extra complexity here is that when we render the value into the template,
we have to escape the string TWICE.
That's because we're rendering it into a GraphQL query string,
which itself is part of a JSON document: the GraphQL Operation that we're sending to the Prisma Query Engine.
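
A small sketch makes the double escaping easier to see. I'm using JSON.stringify for both passes here as an approximation of GraphQL string escaping, which follows the same rules for quotes and backslashes:

// 1st pass: the parameters argument itself is a JSON-encoded string.
const parameters = ["jens@wundergraph.com", "a value with \"quotes\""];
const parametersAsJSON = JSON.stringify(parameters);
// -> ["jens@wundergraph.com","a value with \"quotes\""]

// 2nd pass: that string is rendered as a String value inside the upstream
// GraphQL Operation, which is itself sent as JSON, so it gets escaped again.
const argument = `parameters: ${JSON.stringify(parametersAsJSON)}`;
// -> parameters: "[\"jens@wundergraph.com\",\"a value with \\\"quotes\\\"\"]"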

Conclusion

As you can see, building an API Gateway that is capable of dynamically rewriting GraphQL Operations in a performant and maintainable way is not a trivial task.

I've been working on the WunderGraph engine for a few years now, and I have to say that it was crucial to have designed the engine around this split between the planning and execution phases.
Because I wrote graphql-go-tools from scratch,
I was able to implement the AST, Lexer, Parser, Visitor and Printer components in a way that not only enables these use cases,
but also keeps the whole codebase maintainable and easy to extend.

I think that the power of this engine really lies in the fact that we don't have Resolvers.
Resolvers limit you in many ways, especially when you want to do Ahead-of-Time (AOT) computation of the resolution process.

I call this concept "Thunk-based Resolvers".
The idea is that you don't implement resolvers,
but instead you implement a factory that generates a function that can be called at a later point in time.

When the WunderGraph engine is done with the planning phase,
the artifact is an execution plan that describes how to construct a JSON response from one or multiple data sources.
These data sources could be anything, from a REST API to a SQL database.
We traverse the AST, and use the meta-data we have about the data sources to construct a plan that describes how to fetch the data from the data sources and what fields to map from the data source response to the GraphQL response.

This plan is cacheable, so we can cache the plan for a given GraphQL Operation and reuse it for subsequent requests.
At runtime, all we do is take the variables from the request, "render" them into the templates within the plan and execute it.
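
To make the idea more concrete, here's a conceptual TypeScript sketch of thunk-based resolvers with a plan cache. The types and names are hypothetical; the actual engine is written in Go:

// A thunk is a pre-computed execution plan: call it later with the variables.
type Plan = (variables: Record<string, unknown>) => Promise<unknown>;

const planCache = new Map<string, Plan>();

// Hypothetical planner factory: in the real engine this traverses the Operation AST,
// maps fields to data sources and builds the response templates. Here it just
// returns a thunk that closes over the Operation as a placeholder.
function plan(operation: string): Plan {
    return async (variables) => ({ operation, variables });
}

export async function execute(operation: string, variables: Record<string, unknown>) {
    let thunk = planCache.get(operation);
    if (!thunk) {
        thunk = plan(operation); // the expensive planning work happens once per Operation
        planCache.set(operation, thunk);
    }
    return thunk(variables); // at runtime: render the variables into the plan and run it
}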

Regarding the solution,
I think it's a good example of how you can leverage GraphQL to collapse a complex problem into a few lines of code.

If you found this topic interesting, you might want to follow me on Twitter or join our Discord Community.
