It was the best of times, it was the worst of times. It was the age of wisdom, it was the age of foolishness. And I need a database for my front end.
There are many solutions out there for this exact problem. Having worked with SQL databases for most of my career, I am very comfortable with relational databases, SQL scripts, PL/SQL, all that jazz. But the world of front end development is relatively new to me.
Currently I am developing a web application that will change the world! At the very least, it is changing my perception of the world, and helping me learn the ins and outs of VueJS, Nuxt, Netlify functions, and now, databases for the front end!
Having recently done some prototyping with Netlify functions, I am really excited at the prospect of building out lambda functions using Node.js to provide my front end with the data it needs.
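For anyone unfamiliar with them, a Netlify function is just a small Node.js handler. A minimal sketch looks something like this (the file name and payload are placeholders, not from my actual app):

```js
// functions/hello.js (a hypothetical function, just to show the shape of a handler)
exports.handler = async (event) => {
  // Query string parameters sent by the front end arrive on the event object
  const name = (event.queryStringParameters || {}).name || 'world';

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

From the front end it is just a fetch to /.netlify/functions/hello, which is what makes the idea of putting database access behind these functions so appealing.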
Perhaps suffering from shiny ball syndrome, I dove straight into FaunaDB. Initial prototyping did not even need Netlify functions. I was able to get up and running with a noSQL database and a GraphQL interface in one fell swoop. It really was quite a simple process, and the performance of my GraphQL queries was fast, repeatedly under 100ms. However, I quickly ran into some constraints that I was not well equipped to overcome.
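Before getting to those constraints, it is worth showing how little is involved in that happy path. A query against Fauna's GraphQL interface is just an HTTPS POST from the browser, along these lines (the schema and field names are made up, and the endpoint and key handling are from memory, so check Fauna's docs before copying):

```js
// Query FaunaDB's GraphQL endpoint directly from the front end.
// FAUNA_CLIENT_KEY is assumed to be a key scoped for client-side use only.
async function loadRecipes() {
  const response = await fetch('https://graphql.fauna.com/graphql', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${FAUNA_CLIENT_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: `
        query {
          allRecipes {
            data {
              title
              cookingTime
            }
          }
        }
      `,
    }),
  });

  const { data } = await response.json();
  return data.allRecipes.data;
}
```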
My history of solving problems with SQL left me struggling with GraphQL as a way to access data. I am used to shaping my data in an SQL script, filtering and manipulating it all in one place. Working with GraphQL, I felt really restricted by that way of thinking. Sticking with FaunaDB, I turned to the native Fauna Query Language (FQL). This was completely foreign to me, but I felt that if I spent some time learning it, I could really benefit.
FQL is a procedural language, while SQL is declarative. The procedural nature of FQL means that all the complexities of executing a query are front and centre when you write your very first query. Keen to learn though, I gave myself a weekend. My first target was to convert a simple SQL query: join two tables (aka collections) and restrict the data based on a parameter in each table. In SQL, this is a 20-second query to write. It is something that cannot be done in a GraphQL query, and, as I found out, it is not straightforward in FaunaDB either.
To do so, you have to create indexes on the data fields you want to query on, and the resulting FQL is really foreign to me. So foreign, in fact, that I could not get my head around it in a weekend. At this point, I realised I was not yet ready for a dive into noSQL databases. They were really going to slow me down.
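To give a sense of what I mean, here is a hedged sketch of the kind of translation I was attempting. The table, collection, index and field names are invented for illustration, and the FQL is my best approximation using the faunadb JavaScript driver rather than a polished solution:

```js
// The SQL version: a 20-second query to write.
//
//   SELECT o.*
//   FROM   orders o
//   JOIN   customers c ON c.id = o.customer_id
//   WHERE  c.country = 'AU'
//   AND    o.status  = 'open';
//
// The FQL version needs two indexes to exist before it will run:
//   customers_by_country          (terms: data.country)
//   orders_by_customer_and_status (terms: data.customerRef, data.status)

const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({ secret: process.env.FAUNADB_SECRET });

const openOrdersByCountry = (country) =>
  client.query(
    q.Map(
      q.Paginate(
        // Join: for each customer in the given country, jump to their open orders
        q.Join(
          q.Match(q.Index('customers_by_country'), country),
          q.Lambda(
            'customerRef',
            q.Match(
              q.Index('orders_by_customer_and_status'),
              q.Var('customerRef'),
              'open'
            )
          )
        )
      ),
      // Hydrate each order reference into the full document
      q.Lambda('orderRef', q.Get(q.Var('orderRef')))
    )
  );
```

Every one of those building blocks is learnable, but coming from SQL it is a lot of ceremony for a join and two filters.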
In the future, when I do have the time and/or the need for a noSQL database, I think I will start with The DynamoDB Book. There is a great interview with its author, Alex DeBrie, on the Software Engineering Daily podcast.
So, I began to look elsewhere. What were others doing in the JAMStack space? At the time #RedwoodJS was being launched, and Tom Preston-Werner was doing the rounds on many of the JS podcasts I listen to. They too had decided to steer away from FaunaDB, and were instead building their full-stack JAMStack using Postgres and Prisma.io.
Postgres, an SQL database... now that is a language I am familiar with... and open source also. But what is Prisma.io? I decided to find out if it could give me what I needed in my Netlify functions...
So away I went, starting with my familiar SQL database table creation. Comparing this step alone to working with FaunaDB, I would say the process of generating a FaunaDB GraphQL schema is actually easier and a bit more streamlined. However, it's the next step that really blew my mind with Prisma.io...
Different to many ORMs, the Prisma CLI has a command called introspect. This basically looks at your database and automatically builds the application model. From here, you need to execute a second command (what a pain!) called generate, and you are ready to go.
The generate command, and I will take this straight from the Prisma.io documentation, "provides a programmatic and type-safe API to read and write data in the shape of your application models".
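To make that loop concrete, here is roughly how it fits together, using a made-up recipe table. The commands are as I understand them from the Prisma docs and can differ between versions, so treat this as a sketch rather than gospel:

```js
// 1. Create the table in Postgres the old-fashioned way, for example:
//
//      CREATE TABLE recipe (
//        id           SERIAL PRIMARY KEY,
//        title        TEXT NOT NULL,
//        cooking_time INTEGER
//      );
//
// 2. Point Prisma at the database and introspect it:
//
//      npx prisma introspect   (reads the database and writes the models into schema.prisma)
//
// 3. Generate the type-safe client from those models:
//
//      npx prisma generate     (builds @prisma/client tailored to your schema)
//
// After that, the client is ready to use inside a Netlify function:
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();
// prisma.recipe now exposes methods such as findMany and create
```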
Within a few hours, I was already feeling very comfortable with the APIs, and actually felt quite powerful. The APIs are a direct match for the relational data models, and I am able to do most things I can do using SQL. However, I am using JavaScript, and benefiting from the type-safe APIs at the same time. And if I run into limitations, I can simply write an SQL script and execute it using a Prisma API.
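Here are a couple of hedged examples of what that feels like in practice, reusing the made-up recipe model from above (the method names are from the Prisma Client docs as I understand them, and the raw-SQL escape hatch in particular varies a little between versions):

```js
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  // The generated API mirrors the relational model: filter, sort and paginate in JavaScript
  const quickRecipes = await prisma.recipe.findMany({
    where: { cooking_time: { lte: 30 } },
    orderBy: { title: 'asc' },
    take: 10,
  });

  // And when the generated API is not enough, drop back to plain SQL
  const stats = await prisma.$queryRaw`
    SELECT COUNT(*)::int AS total, AVG(cooking_time) AS avg_time
    FROM   recipe
  `;

  console.log(quickRecipes, stats);
}

main().finally(() => prisma.$disconnect());
```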
So, for now, I think I will stay in the familiar city of SQL. I am still really keen to learn more about noSQL and FaunaDB. My biggest hope for FaunaDB is that they come up with an API similar to Prisma, which abstracts away some of the complexities of FQL. SQL is on the roadmap for FaunaDB... I will definitely be watching this space.
Top comments (2)
Interesting feedback, thanks for the write-up. It's definitely true that FQL is designed differently (and there are a few good reasons for that; one is scalability). One cool aspect of it is that you can easily generate it, so the future is bright: it wouldn't be surprising to see more and more community-driven libraries that generate FQL to make it easier to write. That said, we are also working hard on simplifying it and making it easier in the future.
If you do ever give it another try, here are some good resources:
And if you have feedback of what could have helped you learn it more easily, let us know :)
Thanks for this awesome article, James! 🚀
Great to hear you liked Prisma (I work there 😄), so definitely happy to answer any questions about the Prisma tools that are coming up on your end! 🙂