In our introduction to Absinthe, we covered the basics of Absinthe and GraphQL for an Elixir application.
Now we'll dig deeper and see how we can customize Absinthe for larger-scale Elixir applications.
Let's get going!
Defining Resolvers for a Large Elixir App
We've seen that creating a query and returning a result in Absinthe is straightforward. But if your resolution logic is complex, the schema can quickly become heavy. For large apps using Absinthe, it's common practice to define resolver functions in separate modules.
First, create a module that defines functions acting as field resolvers:
# lib/my_app_web/schema/resolvers/blog.ex
defmodule MyAppWeb.Schema.Resolvers.Blog do
  def post(%{id: post_id}, _resolution) do
    # complex resolution logic here
    # post = ...
    {:ok, post}
  end
end
Then, update your schema to use this resolver:
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  query do
    field :post, :post do
      arg :id, non_null(:id)
      resolve &MyAppWeb.Schema.Resolvers.Blog.post/2
    end
  end
end
This also allows you to unit-test that resolution logic separately from your GraphQL integration tests. We will discuss more about testing schemas later on in this series.
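For a taste of what that looks like, here is a minimal sketch of such a unit test. The post_fixture/0 helper is an assumption; substitute whatever fixture or factory setup your test suite uses:

defmodule MyAppWeb.Schema.Resolvers.BlogTest do
  use ExUnit.Case, async: true

  alias MyAppWeb.Schema.Resolvers.Blog

  test "post/2 resolves a post by id" do
    # post_fixture/0 is an assumed test helper that inserts a post
    post = post_fixture()

    # A resolver is just a function, so we can call it directly;
    # the second argument (the resolution struct) is ignored here
    assert {:ok, resolved} = Blog.post(%{id: post.id}, %{})
    assert resolved.id == post.id
  end
end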
Resolvers also help you reduce code duplication across your schema. For example, let's say you need to expose a field under a different name from the one defined on the struct.
We can build the object like this:
object :post do
  # ...
  field :published_at, :datetime do
    resolve fn %Post{} = post, _args, _resolution ->
      {:ok, post.inserted_at}
    end
  end
end
If you find yourself doing this in several places in the schema, it might be better to create a resolver:
defmodule MyAppWeb.Schema.Resolvers.Base do
  @doc """
  Returns a resolver function that returns the value at `field` from `parent`.
  """
  def alias_field(field) do
    fn parent, _args, _resolution -> {:ok, Map.get(parent, field)} end
  end
end
And then use it in your object like this:
object :post do
  # ...
  field :published_at, :datetime, resolve: Resolvers.Base.alias_field(:inserted_at)
end
Avoiding N+1 Queries in Elixir
Let’s get back to our schema, but with a query that returns a list of posts:
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  object :post do
    # ...
    field :author, non_null(:author),
      resolve: &Resolvers.Blog.post_author/3
  end

  query do
    field :posts, non_null(list_of(:post)),
      resolve: &Resolvers.Blog.list_posts/2
  end
end
Here is the full Resolvers.Blog if you are interested.
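For reference, a minimal sketch of what that module might contain. Blog.list_posts/0 and Accounts.get_author!/1 are assumed context functions:

defmodule MyAppWeb.Schema.Resolvers.Blog do
  alias MyApp.{Accounts, Blog}

  def list_posts(_args, _resolution) do
    {:ok, Blog.list_posts()}
  end

  # Called once per post; this is where the N+1 problem comes from
  def post_author(%{author_id: author_id}, _args, _resolution) do
    {:ok, Accounts.get_author!(author_id)}
  end
end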
Now, perform a query that includes the author of each post:
query {
  posts {
    id
    author {
      id
      firstName
    }
  }
}
When this query is executed, we first fetch all posts from the database. Then, for each post, we fetch its author by id: one query for the posts plus N queries for the authors. That's the classic N+1 problem, and it's not very efficient.
But that’s an easy problem to solve. We can just preload the authors in Resolvers.Blog.list_posts/2.
However, what if someone makes a query that doesn’t need an author? We will still be unnecessarily fetching all authors, defeating the whole purpose of having a composable query like this.
One possible solution is to be smart when resolving the initial list of posts and load authors only when the user has selected the author field in the query. If you remember, we get an Absinthe.Resolution struct as the last argument to the resolver function. resolution.definition.selections contains all the selected fields (Absinthe.Blueprint.Document.Field structs). We can check whether author was selected, and preload it right there:
defmodule Resolvers.Blog do
  def list_posts(%{}, %Absinthe.Resolution{} = res) do
    posts = Blog.list_posts()

    author_selected? =
      Enum.any?(res.definition.selections, &(&1.name == "author"))

    if author_selected? do
      {:ok, Repo.preload(posts, :author)}
    else
      {:ok, posts}
    end
  end
end
That works, but if you have many fields (or several nested levels), it can soon become too cumbersome. This is where dataloader can help.
Dataloader to the Rescue!
Dataloader provides an efficient way to load linked data by batching queries. It does this by first performing a full pass through the query resolution phase, collecting all the ids of objects to be loaded. Dataloader then loads them all in one go instead of making a request to the database for each of them separately.
It’s a separate dependency, so first add it to your mix.exs inside deps.
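For example (the version requirement here is illustrative; check Hex for the latest release):

# mix.exs
defp deps do
  [
    # ... other dependencies ...
    {:dataloader, "~> 2.0"}
  ]
end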
You then need a one-time setup in your schema:
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  # ... schema definition here ...

  # Add dataloader to the context
  def context(ctx) do
    loader =
      Dataloader.new()
      |> Dataloader.add_source(Blog, Blog.data())
      # ... add other sources here ...

    Map.put(ctx, :loader, loader)
  end

  # Add dataloader to the plugins
  def plugins do
    [Absinthe.Middleware.Dataloader] ++ Absinthe.Plugin.defaults()
  end
end
You can add custom fields to the context through the context/1 callback in the schema. The context's value is available at all steps of the GraphQL request cycle (e.g., inside resolvers or middleware) via the %Absinthe.Resolution{} struct. The context is also where we usually store user details if we authenticate users. Check out the Absinthe Context and Authentication guide to explore this further.
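For illustration, here's how a resolver might read that user from the context. This is a sketch: the :current_user key and Blog.list_posts_by_author/1 are assumptions, and an authentication plug upstream would have put the user into the context:

def my_posts(_args, %Absinthe.Resolution{context: %{current_user: user}}) do
  # :current_user is assumed to be set by an authentication plug
  {:ok, Blog.list_posts_by_author(user.id)}
end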
In addition, we add dataloader to plugins using the plugins/0 callback on the schema. This allows dataloader to hook into the resolution pipeline. If you want to learn more about plugins, read the Writing Middleware and Plugins guide for Absinthe.
Dataloader.add_source/3 expects the name of the data source as its second argument and, as its third, a source (a struct implementing the Dataloader.Source protocol). Dataloader ships with an Ecto-based data source out of the box, which is exactly what we need for our example.
Let's update our Phoenix context (Blog in our example) to return the Ecto data source from data/0:
# my_app/blog.ex
defmodule MyApp.Blog do
  def data(), do: Dataloader.Ecto.new(MyApp.Repo, query: &query/2)

  def query(queryable, _params), do: queryable
end
It isn't documented very well, but if you are feeling adventurous, most of the data-loading magic lies inside the Ecto data source. It uses the query/2 function we passed in to generate a base query whenever it needs to load some data. For example, if we try to load Author records with ids 1 through 5, it will make a single query like from a in Author, where a.id in [1, 2, 3, 4, 5], instead of making 5 different queries. This function is our opportunity to filter results and return the query that will finally be used to fetch the items. For now, we just return the queryable as it is, which means that we don't need any special filtering.
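As an example, a filtering query/2 inside MyApp.Blog could look like this. It's a sketch: it assumes a published boolean column on posts and an alias for MyApp.Blog.Post:

import Ecto.Query

# Only ever load published posts through this source
def query(Post, _params), do: where(Post, [p], p.published)
# Fall through for every other schema
def query(queryable, _params), do: queryable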
Use Dataloader Inside the Absinthe Schema
To use Dataloader inside our schema, we must now modify our post object to use the dataloader helper:
import Absinthe.Resolution.Helpers, only: [dataloader: 2]

object :post do
  # ... other fields
  field :author, non_null(:author),
    resolve: dataloader(MyApp.Blog, :author)
end
- The first argument to dataloader/2 is the source name (as registered in the schema).
- The second argument is the name of the field in the parent object (the author field in the parent object post).
Note that the data source should be the one that will resolve the field. So if the post belongs to an author with type MyApp.Accounts.User, you must use dataloader(MyApp.Accounts, :author) as the resolver and support data/0 and query/2 inside the Accounts context.
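In that case, the Accounts context would mirror the Blog context above. A sketch (remember to also register it with Dataloader.add_source/3 in the schema's context/1 callback):

# my_app/accounts.ex
defmodule MyApp.Accounts do
  def data(), do: Dataloader.Ecto.new(MyApp.Repo, query: &query/2)

  def query(queryable, _params), do: queryable
end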
Here is the full code if you are interested. I know it's a lot to take in, so let's go through the execution of the query above.
Absinthe first invokes our posts resolver (Resolvers.Blog.list_posts/2), which returns the list of posts. Absinthe then checks the fields it needs inside post and encounters a selection for author.
This is where dataloader takes over:
- It collects the author_id for all posts that will be returned in our result. Let's say we need to load authors [1, 2, 3, 4, 5].
- It then calls MyApp.Blog.query(Author, %{}) to get the initial query. In our example, we simply return Author (but in a real application, this could be filtered by business case; for example, if we need only authors with an active account, we could return where(queryable, [a], a.active) instead of just returning queryable).
- Finally, it loads the required ids using the above query: from a in Author, where a.id in [1, 2, 3, 4, 5].
As you can see, we performed a single query for all the authors instead of 5 separate ones.
Nesting also works out of the box, so if each author has an organization field and we select that in the query, Dataloader will load all organizations in one batch.
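A sketch of that nested field, assuming an :organization association on the author schema and a MyApp.Accounts source registered in the schema:

object :author do
  # ... other fields
  field :organization, :organization,
    resolve: dataloader(MyApp.Accounts, :organization)
end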
Organizing Your Absinthe Schema with Imports
As your schema starts growing, you will soon notice that putting all type definitions and query fields in the same file is not sensible. This is where import_types and import_fields come into play.
The level to split at depends on the size of your API and your application, but it is a common practice to split by business context (the same as your Phoenix contexts).
Here is a structure that works well.
- Create a module that contains queries (and another for mutations) related to each model:
# lib/my_app_web/schema/types/blog/post/queries.ex
defmodule MyAppWeb.Schema.Types.Blog.Post.Queries do
  use Absinthe.Schema.Notation

  object :post_queries do
    field :posts, list_of(:post), resolve: &Resolvers.Blog.posts/2
    # ... all queries related to post here
  end
end
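The mutations module follows the same pattern. A sketch, where create_post/2 is an assumed resolver:

# lib/my_app_web/schema/types/blog/post/mutations.ex
defmodule MyAppWeb.Schema.Types.Blog.Post.Mutations do
  use Absinthe.Schema.Notation

  object :post_mutations do
    field :create_post, :post do
      arg :title, non_null(:string)
      resolve &Resolvers.Blog.create_post/2
    end
    # ... all mutations related to post here
  end
end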
- Create a module for types related to each model. Also import the query and mutation types here.
# lib/my_app_web/schema/types/blog/post.ex
defmodule MyAppWeb.Schema.Types.Blog.Post do
  use Absinthe.Schema.Notation

  import_types(MyAppWeb.Schema.Types.Blog.Post.Queries)
  import_types(MyAppWeb.Schema.Types.Blog.Post.Mutations)

  object :post do
    field :title, non_null(:string)
    # ...
  end

  # all types related to post here
end
- Create a module for types related to each context. This should only import the types from model-specific modules.
# lib/my_app_web/schema/types/blog.ex
defmodule MyAppWeb.Schema.Types.Blog do
  use Absinthe.Schema.Notation

  alias MyAppWeb.Schema.Types

  import_types(Types.Blog.Post)
  # ... import all types related to blog here

  object :blog_queries do
    import_fields(:post_queries)
    # ... import all queries related to blog here
  end

  object :blog_mutations do
    import_fields(:post_mutations)
    # ... import all mutations related to blog here
  end
end
- Finally, import the context-specific types to your schema.
# lib/my_app_web/schema.ex
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  import_types(MyAppWeb.Schema.Types.Blog)

  query do
    import_fields :blog_queries
  end

  mutation do
    import_fields :blog_mutations
  end
end
This way, your schema stays clear, declaring only what it imports. The specific queries and mutations live further down the module tree.
This may seem like overkill for our small API example.
But we have been using it in production for a large app with several contexts, and it’s been a boon to keep our schema manageable.
Wrap Up
In this post, the second part of our series on Absinthe, we customized Absinthe for an Elixir application pushing a lot of data. We started by defining resolvers for a big Elixir application and covered how to avoid N+1 queries.
Finally, we dived into Dataloader (which helps to load linked data) in some detail and explored how to organize our Absinthe schema.
Next up, we'll look at creating mutations and subscriptions with Absinthe.
Until then, happy coding!
P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!