Vladimir Novick

Effortless Real-time GraphQL API with serverless business logic running in any cloud

In this post, I will show you how to create an effortless real-time GraphQL API with serverless business logic and deploy it to any cloud. Sounds a bit like a click-bait title, right? What is effortless anyway? Obviously, some effort is involved. If you are familiar with GraphQL, or have heard about it and always wondered what it is and how to start writing GraphQL servers, you may assume that we will create our own GraphQL server. You may also assume that we will be dealing with cloud deployments and serverless functions. In a nutshell - complex stuff.

Well, effortless is the key word here. It is super simple to set up and run your own GraphQL API in any cloud of your choice on top of an existing Postgres database or its extensions. And no, we won't be setting up servers or talking much about cloud deployments. Well, maybe a tiny bit. In this blog post, I will explain most of the feature set of the Hasura.io open source engine and how it gives you a real-time GraphQL API without creating your own server. We will navigate through its features to give you an in-depth overview, including use cases for using it with your existing server, a non-Postgres database, or serverless functions.

Table of Contents

  • GraphQL intro
  • What is this engine and why it's open source
  • Let's get started on Heroku
  • What about other clouds?
  • Running on top of existing Postgres
  • What about Postgres extensions (PostGIS, TimescaleDB)
  • And what if I don't use Postgres?
  • Hasura engine console overview
  • Data modeling, relationships, and access control
  • Authentication
  • Custom external GraphQL server aka Remote Schemas
  • Async serverless business logic with Event Triggers

GraphQL

GraphQL is not only a buzzword but a widely adopted way to interact with servers that is gradually replacing REST APIs. In a nutshell, GraphQL is a query language for your API and a type system in which you define your data. After you define your data shape, as well as how to retrieve that data on the server, the client can query, change or even subscribe to data changes using a specific query format. The data you receive will be in the exact same shape as you requested. In this blog post I won't go in depth describing what GraphQL is, but if you're new to GraphQL, I'm running a free 4-day Bootcamp that you should totally subscribe to!
We will cover how to use an existing GraphQL API in React, Angular or Vue and also learn how to create our own GraphQL API in NodeJS.
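
As a quick illustration before we move on: the client sends a query describing exactly the shape it wants, and the server answers with JSON in that same shape. The field names below are just an example:

# Ask only for the fields you need, nested the way you need them
query {
  posts {
    title
    author {
      name
    }
  }
}

The response will contain a data.posts array with exactly title and author.name for each post - nothing more, nothing less.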

As you may assume from the title of this post, here we will be talking about an effortless real-time GraphQL API. Sounds like magic, right? Well, it certainly feels so with the hasura.io open source GraphQL engine! Let's dive in!

What is this engine and why it's open source

The Hasura engine runs in a Docker container and sits as a layer on top of a new or existing Postgres database.

Yeah, that's right. Not only can you create your GraphQL API from scratch with ease, you can also run the engine on top of your existing Postgres database.

Since it runs in a Docker container, you can basically run it anywhere Docker can run, meaning on Heroku, Digital Ocean, AWS, Azure, Zeit, GCP or even your local environment.

The Hasura engine comes with a slick UI from which you can test your queries, mutations or subscriptions using the GraphiQL tool, extended with various cool addons that we will overview in a bit.
So why is it open source? That is a question I asked Tanmai Gopal, co-founder of hasura.io.

Because it's a part of your stack. And in today's age you need the transparency and flexibility of an open-source component. Open-source makes it easy to migrate into and away from, improve security, flexibility to extend and features you'd like to see, engage with a community. Communities also help you ensure that your open-source product can run in different environments. Hasura is multi-cloud and multi-platform with the help of our community running it in their favourite environments and contributing information back to the project.

Let's get started on Heroku

The first thing that you can see when you go to hasura.io is Getting started with Heroku as a free option.

front-page

This is a really quick and pretty solid setup, but for a production-grade application you should consider using a different cloud. As you can see, you can choose from the various cloud options out there, but for the sake of simplicity let's get started with the basic setup on the Heroku free tier.

Deploy

The guide will show you the magic button. When you click it, the engine is deployed to Heroku along with the Heroku Postgres addon.

So what is happening here?

Heroku has a concept of templates which you can deploy. So when we click Deploy to Heroku, what actually happens is that we hit this link:

https://heroku.com/deploy?template=https://github.com/hasura/graphql-engine-heroku

In fact, the template that we are deploying is:

https://github.com/hasura/graphql-engine-heroku

Getting deeper

Let's check the template's app.json file (the file where the Heroku configuration is defined):

app.json

This JSON file tells Heroku to deploy a web formation in the free tier size, with a Postgres addon.
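
I won't reproduce the exact file here, but an app.json for a Docker-based Heroku deploy with a Postgres addon is roughly shaped like this (a sketch, not the verbatim contents of the Hasura template):

{
  "name": "Hasura GraphQL Engine",
  "stack": "container",
  "addons": [{ "plan": "heroku-postgresql:hobby-dev" }],
  "formation": { "web": { "quantity": 1, "size": "free" } }
}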

Along with app.json we also have heroku.yml which is as simple as:

dockerheroku
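
In case the screenshot is hard to read: the standard heroku.yml syntax for building a Dockerfile-based web process boils down to this (a sketch; check the template repo for the exact file):

# build the web process from the Dockerfile in the repo root
build:
  docker:
    web: Dockerfile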

It's all about Docker!

As you can see, we specify that there is a Dockerfile to run. So the takeaway is that deploying to Heroku is basically syntactic sugar around deploying a Docker container to Heroku. For Digital Ocean, the one-click deploy is a bit different but follows the same idea.

The Digital Ocean image is just Ubuntu + Docker + Postgres already set up.

AWS and Azure are a bit trickier to set up, but the idea is the same - running the engine in a Docker container and connecting it to a Postgres db.

What about other clouds?

So as you've probably figured out, running the hasura.io engine is possible anywhere you can run Docker and Postgres. AWS, Azure, Zeit, GCP, you name it.

Let's, for example, set up Hasura in a local environment.

Prerequisites

Before installing the engine locally you will need Docker and Docker Compose, which you can install from the official Docker documentation.

Getting a manifest

Now let's get the docker-compose file from the following repo:

https://github.com/hasura/graphql-engine/tree/master/install-manifests

This repo contains various installation manifests required to deploy Hasura anywhere. 

So to get it, create a new directory and run

wget https://raw.githubusercontent.com/hasura/graphql-engine/master/install-manifests/docker-compose/docker-compose.yaml
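
If you open the file you just downloaded, you will find two services: a Postgres container and the graphql-engine container pointing at it. Trimmed down, it is shaped roughly like this (a sketch - the exact image tags and options live in the manifest repo):

version: '3.6'
services:
  postgres:
    image: postgres
    volumes:
      - db_data:/var/lib/postgresql/data
  graphql-engine:
    image: hasura/graphql-engine:latest
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      # connection string pointing at the postgres service above
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data: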

Running the containers

docker-compose up -d

Now check if containers are running:

docker ps

You should get something like this:

containers

As you can see, the engine instance is running along with the Postgres db.

Now the only thing left is to go to http://localhost:8080/console to see the console:

Console

Running on top of existing Postgres

It's also possible to run the Hasura engine on top of an existing Postgres database. For that, instead of getting docker-compose as we did previously, we take the docker-run.sh script from the install-manifests repo and edit the HASURA_GRAPHQL_DATABASE_URL variable. You can read more about it in the Hasura docs.
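
The script boils down to a single docker run command; a minimal sketch (replace the connection string with your own database URL) looks like this:

# expose the engine on port 8080 and point it at an existing database
docker run -d -p 8080:8080 \
  -e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:5432/dbname \
  hasura/graphql-engine:latest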

What about Postgres extensions

Totally possible. There are a couple of awesome blog posts about using PostGIS (a spatial database extender for Postgres) with Hasura, like this one.

Or using Hasura with TimescaleDB (an open source time-series database with full SQL support), as described in this blog post.

And what if I don't use Postgres?

Running on Firebase? Not a problem - check out the firebase2graphql tool. Using Mongo or any other NoSQL database? You can export a JSON dump and use the json2graphql tool to import your data into a Postgres database through the Hasura engine. Using MySQL? Not a problem. You can use https://www.symmetricds.org/ to migrate from MySQL to Postgres, or even use a Postgres FDW (foreign data wrapper) to make Postgres a proxy for the data in MySQL.

Console overview

So now we know that we can run the engine locally or on any cloud of our choice, and not only on top of any Postgres database but also on top of Postgres extensions such as PostGIS or an open source DB such as TimescaleDB. But we haven't talked yet about the capabilities of the engine itself. So let's get back to the console that we've already seen when running the engine locally.
The console has 4 main tabs:

GraphiQL

Endpoint and headers

At the top of this page, you will find your API endpoint as well as the request headers that you need to provide if you want to access the GraphQL API from a client.

headers

As you can see, in our newly created example we have only the Content-Type header, which is not really secure since everyone can access our API. You will see a notification about this in the top right corner, "Secure your endpoint", that leads you to the docs explaining how to secure your endpoint.
Here is a different example of an API with secured access:

xaccess

Here you can see that we have an X-Hasura-Access-Key header that secures our endpoint. It also secures access to our console.
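
From the client's point of view this just means sending that header with every request. For example, hitting the endpoint with curl might look like this (copy the exact endpoint URL from the top of the GraphiQL tab; the URL, key and field names here are placeholders):

curl https://your-app.herokuapp.com/v1alpha1/graphql \
  -H 'Content-Type: application/json' \
  -H 'X-Hasura-Access-Key: <your-access-key>' \
  -d '{"query": "{ posts { id } }"}'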

GraphiQL IDE with extensions

The GraphiQL tab has an embedded GraphiQL IDE that gives you the ability to run queries, mutations or subscriptions to test your GraphQL API from the comfort of your browser. You can also explore the docs of your GraphQL schema to see the shape of the data and which queries, mutations and subscriptions you can execute. The Hasura engine adds additional features on top of GraphiQL.

graphiql

  • Prettify - prettifies the GraphQL syntax in the left pane
  • History - shows the last executed queries/mutations
  • Analyze - this is a really awesome tool. The Hasura engine does not run resolvers to fetch the data but actually compiles your GraphQL queries to SQL queries. The Analyze button shows you how a query is compiled.

Consider the following query:

{
  posts {
    author {
      user {
        name
      }
    }
  }
}

When you click the Analyze button, you will get:

analyze

Here you can analyze how your queries are executed against the database, which gives you or your DBA an indication of how to optimize database relations to make them more efficient.
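
To make the idea concrete: instead of calling a resolver per field, the engine turns the whole nested query into a single SQL statement over the joined tables, conceptually something like this (a hand-written illustration assuming an authors.userId column, not the engine's actual output, which also aggregates the result into JSON):

-- one round trip instead of N+1 resolver calls
SELECT u.name
FROM posts p
JOIN authors a ON a.id = p."authorId"
JOIN users u ON u.id = a."userId";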

Data

This tab is sort of an admin panel for your Postgres database. Here you can define your schema structure and table relations, set roles and permissions, and even run your own custom SQL. We will explore data modeling in the next section.

Remote schemas

The Remote schemas tab is where you can specify the URL of a custom GraphQL server for your custom business logic.

remote schemas

The Hasura engine will do schema stitching between your Hasura GraphQL API and your custom GraphQL server. So, for example, if you want to run custom business logic before adding something to the database, you can write a mutation on your own GraphQL server, provide the Hasura engine with the server URL together with additional headers for security, and the engine will stitch the schemas together.
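
For example, your custom server might expose a single mutation that validates its input before touching the database. The relevant part of its schema could look like this (the names are made up for illustration):

type Mutation {
  # runs custom validation, then inserts the row
  validateAndInsertPost(title: String!, content: String!): InsertResult
}

type InsertResult {
  id: ID!
}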

Events

The Hasura engine uses a powerful eventing system. Whenever anything is inserted, updated or deleted in the database, an event can be triggered. It's advised to connect events to serverless functions. We will talk more about this in a later section.

Data modeling

Now that we've seen how to navigate the console, let's dive a bit deeper into the Data tab and understand how we can model our data as well as set up table relationships.

Creating or Modifying tables

When we access the Data tab we have an option to create tables. When creating a table you need to specify its columns as well as their types.

modifying

Hasura also gives you helper functions. In our case, it's gen_random_uuid() for auto-generating the unique identifier for the post id primary key. Here you need to select your primary key column (or several columns).

As in any database admin, you are able to set a foreign key mapping to a different table. In the following example we map authorId in the posts table to the id column of the authors table:

foreign

As you can also see from this example, once we have tables we can modify them, browse rows, insert rows or add relationships.
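
For reference, what the console builds in this example corresponds roughly to the following SQL (a sketch - title and content are assumed columns, and gen_random_uuid() comes from the pgcrypto extension):

CREATE TABLE posts (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  title text NOT NULL,
  content text,
  "authorId" uuid REFERENCES authors (id)
);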

Autogenerated queries

The cool part is that whenever we add a table, we can access the following queries/subscriptions on our table:

query

As you can see, they are pretty powerful. We can not only query or subscribe to our data but also order and filter it. And of course, you have delete/insert/update mutations.
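
For example, with the posts table in place, queries and mutations along these lines become available out of the box (column names other than id and authorId are assumed here):

query FilteredPosts {
  posts(where: { title: { _ilike: "%graphql%" } }, order_by: { id: desc }, limit: 5) {
    id
    title
  }
}

mutation AddPost {
  insert_posts(objects: [{ title: "Hello Hasura", authorId: "<some-author-uuid>" }]) {
    affected_rows
  }
}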

Relationship builder

By accessing the Relationship tab we will be able to build two types of relationships between tables:

  • Object relationships
  • Array relationships

When we have a foreign key set up - for example, in our case the posts table's authorId column points to id in the authors table - we can query posts, but we won't be able to get nested data from the authors table as we would expect in GraphQL. For that, we need to set up an object relationship.

object rel

The UI will also suggest creating it in various places, so you can just click Add in the suggested object relationships, or you can create the relationship manually.
Once you do that, you will be able to run a query like this:

{
  posts {
    id
    author {
      bio
    }
  }
}

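The reverse direction is an array relationship: if you also add the author's posts as an array relationship (assuming it is named posts), you can nest the list the other way around:

{
  authors {
    id
    posts {
      id
    }
  }
}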

Permissions

In the Hasura engine we can define roles and permissions at a really granular level. We can, for example, allow access to a specific column only if a specific rule is met. We can also pass variables from our custom authentication webhook and define access rules based on them. In the console it looks like this:

rel

In this example, we check whether the provided X-Hasura-User-Id session variable matches the row being accessed, so a user can only read their own records.
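
In the console such a rule is written as a boolean expression in JSON. A row-level ownership check looks roughly like this (assuming the table has a user_id column holding the owner's id):

{ "user_id": { "_eq": "X-Hasura-User-Id" } }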

Authentication

The Hasura engine supports various types of authentication. You can use a JWT token, your custom webhook, or the Hasura access key. What happens under the hood is the following: the authorization layer checks for a secret access key, a JWT config, or a webhook config.

Let's look at a Heroku example:

herokusample

Here you can see the environment variables set up in the Heroku dashboard.

  • HASURA_GRAPHQL_ACCESS_KEY - the secret access key
  • HASURA_GRAPHQL_AUTH_WEBHOOK - the URL of your custom authorization webhook
  • HASURA_GRAPHQL_JWT_SECRET - the JWT config

For example, if we use HASURA_GRAPHQL_ACCESS_KEY, then we need to provide the X-Hasura-Access-Key header to be able to access the API or the console.

auth
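
If you go the JWT route instead, HASURA_GRAPHQL_JWT_SECRET expects a small JSON config; a minimal sketch with a symmetric key would be (the key is obviously a placeholder):

{ "type": "HS256", "key": "a-long-random-secret-of-at-least-32-characters" }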

You can read more about the different authentication options in the Hasura docs.

Custom external GraphQL server aka Remote Schemas

So what would you use remote schemas for? Let's think about the following example. Let's say you want to insert a row into the database based on some custom server-side validation, but you still want to have a subscription to database changes. In that case, you can create your remote schema by either writing it yourself or using one of the Hasura boilerplates, run it on a server of your choice, and connect it by providing your custom server's GraphQL endpoint URL.

remote

On your server, let's say you have a mutation defined whose resolver runs some custom logic before inserting a row into the same database the Hasura engine is connected to. So what will happen when the data is inserted? The GraphQL subscription from the Hasura engine will fire as expected.

Async serverless business logic with Event Triggers

As described above, Hasura has a powerful concept of events. Events can be triggered not only on table operations but also on changes to specific columns. Whenever an event is triggered, the event data is passed to a webhook URL. It's advised to implement these webhooks as serverless functions. You can check out the serverless boilerplates for creating your functions.
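
The webhook receives a JSON payload describing what changed. For an insert into the posts table it is shaped roughly like this (a trimmed sketch; the full envelope also carries the trigger name, an event id and session variables):

{
  "table": { "schema": "public", "name": "posts" },
  "event": {
    "op": "INSERT",
    "data": {
      "old": null,
      "new": { "id": "<uuid>", "title": "Hello Hasura", "authorId": "<uuid>" }
    }
  }
}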

serverless

Summary

As you can see from this overview, the hasura.io platform is really flexible, can run almost anywhere, and has lots of capabilities that help you create a GraphQL API of any level of complexity almost effortlessly. Also, Hasura is open source and written in Haskell and JavaScript, so all contributions are welcome. You are also welcome to join Hasura on Discord or follow on Twitter.

Top comments (4)

Adam Paquette

What about custom logic before saving or inserting data with mutations? I would like to have a default Hasura endpoint for all the normal querying stuff, but with the possibility to add custom mutations whose flow I control in C# or NodeJS.

Vladimir Novick

You definitely can do that by using custom resolvers via remote schemas, writing custom SQL functions that generate mutations, and using event triggers to trigger serverless functions that execute your custom logic on db manipulations, leveraging the 3factor.app architecture. I will soon write a blog post on how exactly you can do all of these.

Vladimir Novick

I promised a blog post so here you go dev.to/vladimirnovick/different-wa...

Bulletninja

Great!
A tutorial on how to use Postgres FDW to make Postgres a proxy for data in MySQL would be a great addition!