The Journey So Far
A year ago our development team was working through several initiatives. The first was migrating some very old legacy code from form submissions to Web API backed pages; the second was building a search page that let our company's internal users run audits that previously required a ticket for development to write a SQL query. These of course were not the ONLY initiatives going on; they just happened to be the ones I was most involved in.
As part of doing some R&D for the search page, I happened across this concept of GraphQL. It seemed extremely fitting, as it would allow the front end to shape the query it needs. This lets the result columns be selected dynamically: if there's a piece of information you aren't showing in the UI, you don't need to query for it. The tooling available for GraphQL also made our front end developers happy, so it stuck.
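To make that concrete, here's a rough sketch of the kind of query the front end gets to write; the schema and field names are invented for illustration, not our actual API.

```graphql
# Hypothetical audit search query; every name here is illustrative.
query AuditSearch($filter: AuditFilter!) {
  audits(filter: $filter) {
    id
    performedBy
    createdOn
    # If the UI stops showing a column, the front end just deletes the field
    # here; nothing on the back end has to change.
    affectedRecordCount
  }
}
```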
I then got more involved with the legacy code clean up. This had run into a lot of issues because there were a lot of pages that needed similar-but-not-exactly-the-same data to each other, pushing traditional REST APIs into the usual battle between making 5 API calls per page load or building APIs catered to specific pages, which kills reusability.
"But wait", I thought, "I already made this information available in the Search Page I built". With extremely minimal effort, I could make these pages read from that data, add in any extra we need, cut down on duplicated code used for querying and standardize the way we retrieve information across the board. We all agreed early on not to utilize GraphQl for writing/commands. GraphQl mutations don't feel like they offer too much of an advantage over REST.
So we built a vision, kind of a half-baked CQRS stack, where microservices would write into and maintain the databases, and one big GraphQL application would act as the de facto query application across the board.
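In schema terms, the idea looked roughly like the sketch below: one graph spanning multiple domains, with each top-level field served from a read store that a separate write-side service keeps up to date. The types and fields are made up for illustration, not our real schema.

```graphql
# One big query application spanning multiple domains. Each top-level field is
# backed by a read database maintained by a separate write-side service.
# This is a sketch; none of these types are our real schema.
type Website {
  id: ID!
  name: String!
  inventory: [InventoryItem!]!
}

type InventoryItem {
  sku: String!
  websiteId: ID!
  quantityOnHand: Int!
}

type Query {
  websites(nameContains: String): [Website!]!
  inventory(sku: String): [InventoryItem!]!
}
```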
We never achieved that vision, but we did get a decent chunk of the system reading from GraphQL, and we had a lot of plans in place to move the rest over.
It Was Great
I feel like a lot of the naysayers for GraphQL haven't felt how great it is to be able to write a GraphQL query and have it return any piece of information you need, without worrying about all of the efficiency and permissions complications you've already solved in the back end. There were at least 3 cases where I was able to put out some crazy critical fixes with only a few one-line changes that would otherwise have required changes in the REST API.
Being able to traverse between two domains (Websites and Inventory, in this case) within one query is just plain awesome, and it's something that had historically been a pain point for us. Sure, it was a bit odd translating the way we represented Inventory in Elasticsearch to fit cleanly with how the rest of the system worked, but once it worked, it worked great.
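A query in that spirit (again with invented field names, not our production graph) starts in the Websites domain and walks straight into Inventory in a single round trip:

```graphql
# One request that crosses from Websites into Inventory; no second API call,
# no client-side stitching. Field names are hypothetical.
query WebsiteWithStock($nameContains: String) {
  websites(nameContains: $nameContains) {
    id
    name
    inventory {
      sku
      quantityOnHand
    }
  }
}
```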
It was also nice that these GraphQL queries usually spat the information out in a shape that was directly usable by the UI. There was no need for us to massage the results of multiple REST API calls into the shape the UI needed; it pretty much just worked.
There's Always a But...
The astute probably already picked up on it when I mentioned it earlier; I didn't notice it until it was too late. The inventory data in the database that GraphQL sits over often holds duplicate entries for the same product. This made for some awkward traversals and aggregations that didn't really fit with the way the rest of the GraphQL Monolith acted as an authoritative database. Products didn't have a unique identifier, because they simply were not unique.
If we wanted to build a search engine specifically for products, backed by this GraphQL application, we could not do it without getting duplicate results.
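To make that concrete in the same invented-schema style: imagine exposing a products field for that search. Because the underlying inventory records aren't deduplicated, the same product comes back more than once, and there's no stable key to collapse on.

```graphql
# Hypothetical product search over the inventory-backed graph. The store keeps
# one record per listing rather than one per product, so the same product can
# appear several times, and there is no unique product id to dedupe on.
query ProductSearch($term: String!) {
  products(search: $term) {
    sku        # not unique: multiple inventory records can share a SKU
    name
    websiteId  # duplicates often differ only by fields like this one
  }
}
```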
This is kind of the crux of the problem. Monolithic GraphQL applications are GREAT when they sit over authoritative databases. If you have, stored somewhere, THE product, THE website, THE user, then it's just a matter of figuring out how to draw relationships between those objects, and you have a super friendly API.
Having these specially designed query databases is very common in companies that embrace CQRS: databases that hold information in a non-authoritative way just for the sake of being super fast, super searchable, or super aggregatable. But because they are designed to solve one particular problem, they can often be hard to relate to other, more authoritative, data.
Looking to the Future
It's probably worth mentioning that I will still argue that GraphQL is vastly superior to REST for querying data in most cases, and I will keep pushing to support it on as many applications as possible.
There has been a lot of discussion around getting away from centralized authoritative databases. There are a lot of advantages to this, chiefly reducing the amount of coordination two teams may need to do around maintaining the same database. The full case is for another article; I highly recommend looking up Domain-Driven Design and CQRS. But as we get further down that road, it's going to get harder and harder to maintain a GraphQL application that's capable of bridging every single database and every single domain in a coherent, consistent way.