
What are the cons of GraphQL?


I bet everyone here has heard about, used, and loved GraphQL over REST APIs. I also know the reasons why GraphQL is considered better than REST.

BUT

Everything comes with drawbacks.

Even today, not all new startups and apps use GraphQL in their backend.


What could be the reason?

What is the biggest hurdle you face when you learn/work with GraphQL?

And what are the solutions to it?

What are the CONS of GraphQL?


I am asking just for the sake of knowing why some people might not like GraphQL.

I ❤️ GraphQL.

#devDiscuss

DISCUSS (44)
 

I think the biggest con of GraphQL is the loss of network caching. You can't use decades-old HTTP caching and the various proxies built on it, though depending on the case that might not be a problem. This is a great explanation: phil.tech/api/2017/01/26/graphql-v...

 

I don't know if I agree with this. You can have a similar cache setup as you do with a REST API as long as you, say, hash the GraphQL query and hit the cache server-side if the query hashes match.

Most clients aren't sending a variable number of different graphql queries anyway. Most have a fixed set of queries that get sent from different pages.

It may not be as straightforward as REST caching, but it's not very complicated either.

 

I think we're talking about two different types of cache. You can definitely do server side caching with either. After all if you're on the server you own the data so you can cache it however you want.

I meant network caches: the HTTP spec says GETs are safe and cacheable, so intermediaries (proxies, CDNs, edge caching services) can cache the data. That can't easily happen with GraphQL by default, because all queries (even read queries) are transmitted using POST:

Due to the way GraphQL operates as a query language POSTing against a single endpoint, HTTP is demoted to the role of a dumb tunnel, making network caching tools like Varnish, Squid, Fastly, etc. entirely useless.

(taken from phil.tech/api/2017/01/26/graphql-v...)

It's not the end of the world, but it might mean that you have to get creative with your data, given that you're suddenly losing the advantage of proxies.

There are standards popping up, but on one side you have a decades-old caching system that works (and is widely supported by clients, networks and servers), and on the other you have to control both the client and the server if you want caching done right.
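To illustrate one of those emerging workarounds: sending read-only queries over GET puts the whole query in the URL, which gives GET-caching intermediaries something stable to key on. A minimal sketch (the endpoint name is made up):

```javascript
// Build a GET URL for a read-only GraphQL query so proxies and CDNs
// that cache GETs (Varnish, Fastly, etc.) can participate again.
function graphqlGetUrl(endpoint, query, variables = {}) {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  });
  return `${endpoint}?${params.toString()}`;
}

const url = graphqlGetUrl(
  'https://api.example.com/graphql',
  '{ user(id: 1) { name } }'
);
// The URL is now a stable, cacheable key for network-layer caches.
```

Mutations should of course stay on POST; this only helps for queries.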

The author was asking about the cons, and this I think is a con :)

Ah okay, I see what you’re saying. I was getting the network layer and application layer mixed up.

Though, while that’s true for now, I don’t see why tools like varnish couldn’t be extended to cache GraphQL responses at the network layer. If the adoption of GraphQL keeps increasing, we’ll eventually see solutions.

Just to add a bit to the discussion, I would encourage anyone interested in this thread to listen to this talk from GraphQL Summit on HTTP and caching in general with GraphQL. It was given by a senior platform engineer at GitHub. He goes pretty in depth comparing caching in GraphQL vs in a traditional REST API (Including HTTP caching). I think he does a good job of explaining the pros and cons with both patterns. youtube.com/watch?v=CV3puKM_G14

 
 

I know from within React builds using Apollo as the connector, the individual caching is pretty solid and I haven't seen much in terms of drawbacks. Where I wanted to cache the actual responses to serve to multiple individual connections, I didn't find abstracting the backend calls with my own cache layer and serving/invalidating with a Redis layer any more complex than I have with REST APIs over the years.

 

Besides the mentioned drawbacks, one that I see is that instead of relying purely on JSON they invented a custom DSL (wrapped inside JSON), which comes with a custom parser that adds a substantial amount of JS to your code. In our case we went from a ~380k bundle to a 600k bundle (before gzip).

 

Why do you have a graphql parser on your frontend?

 

Practically all clients (e.g., Apollo, urql) come with query evaluation, which requires a parser. This is not the schema parser, but it's still part of the graphql standard lib.

So if your client relies on the graphql package you have a parser in there, too.

I just took a look and neither urql, apollo-client, react-apollo, nor their dependencies have the graphql package as a dependency.

Unless I just missed it, you shouldn’t have a graphql parser client side

Sorry, either you don't know these libs or you just like to trash talk.

  1. Apollo ships with its own implementation. It still parses the queries (it needs to for several reasons).
  2. urql has a peer dep to graphql. See github.com/FormidableLabs/urql/blo....

Still, a parser is needed, as all these libs perform validation up front and provide additional capabilities such as caching. These could all work without additional parsing, but then it would be more cumbersome for the dev, making the abstraction useless.

Jeezus man chill out. I just checked their deps on npmjs.org.

Didn’t know to check peer deps.
Nor did I know that apollo shipped their own.

You can do without. The Clojurescript client just sends a string over the wire, and gets a Clojure map back.

That’s what I was thinking, but, I guess, Apollo needs some info on the query for its caching solution.

That makes sense. The Clojurescript library is more low level. It might be nice to have something similar to Apollo. But I quite like the simplicity of just binding the results of a query to some data in the DB. It's also easy to combine queries and subscriptions that way.

The big draw to Apollo for me is honestly not so much the caching, although that's a big plus, but the fact that GraphQL-codegen can make fully typed Apollo hooks for each GraphQL query or mutation I write.

It’s magical.

Clojurescript isn't typed so that won't work :P. Although you could build something similar using spec, and even get generative testing out of the box for components created that way.

 
 

Queries can be more expensive to run: with the multiple layers of resolvers and schema validation, nested queries in particular add cost. Schema validation is a big plus, don't get me wrong, but it does come with a trade-off.

 
 

Queries might be more expensive, but using a data loader, one GraphQL query can become one query to the database, where with REST it might have needed a hundred simpler calls.
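Here's a stripped-down sketch of the batching idea behind data loaders: collect the IDs that N resolvers would each have fetched on their own and turn them into one IN-clause query. (The real `dataloader` package batches per event-loop tick; `buildBatchedQuery` here is purely illustrative.)

```javascript
// Collapse many per-resolver lookups into a single batched SQL query.
function buildBatchedQuery(table, ids) {
  // De-duplicate so repeated resolvers don't inflate the query.
  const unique = [...new Set(ids)];
  return {
    sql: `SELECT * FROM ${table} WHERE id IN (${unique.map(() => '?').join(', ')})`,
    params: unique,
  };
}

// Six "author" resolvers asking for three distinct users become one query:
const requested = [1, 2, 1, 3, 2, 1];
const { sql, params } = buildBatchedQuery('users', requested);
// sql:    SELECT * FROM users WHERE id IN (?, ?, ?)
// params: [1, 2, 3]
```

That's the N+1 problem solved in one place, instead of hand-tuning each REST endpoint.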

 

it’s just beautiful how data loaders work

 

One more framework.
More code you need to maintain. (You still own the code.)
One more thing your team needs to know about.
Another item you have to explain to a junior.
More runtime "fun" to deal with.
Your project is that much more complicated.
It's a leaky abstraction.

 

GraphQL isn’t a framework, though; it’s a protocol. It’s another protocol layer on top of HTTP, sure, but it doesn’t force you into any way of building an app like a framework does.

What makes you think it’s a leaky abstraction? It seems like REST is a much leakier abstraction over any dataset, since it’s much more work to modify a REST API and its clients than it is to correct a single GraphQL resolver.

 

The official documentation recommends:

"With GraphQL, you model your business domain as a graph"
"Your business logic layer should act as the single source of truth for enforcing business domain rules"

I realize not everyone does this; however, these two statements do force a constraint on your application. But yes, it can just be a protocol if isolated enough.

It's a leaky abstraction because your client needs a specific implementation on the front end. And I would say it is easier to change a specific API endpoint that has few dependencies than an interface that has many more clients.

Comparing REST to GraphQL is apples to oranges.
I don't really have any bad feelings about GraphQL; it's just another hammer to hit nails with.

Your client needs knowledge and an implementation with REST as well, since the structure of the data is static and defined server-side.

And if you change a rest endpoint, you’d likewise have to change all clients, since they have no say in the response’s structure.

The abstraction GraphQL provides allows you to make API changes server-side without affecting the data structure the clients expect.

And yeah there’s no heat here, sorry if it came off like that 😅

Hammering out these small differences can provide some good insights is all.

 

I partially agree with most of what you said, but please explain how you have less static typing in your code? Schema validation comes built in and works both ways (you can't receive an invalid parameter, nor can you send one).

 

My bad, I forgot GraphQL has this feature. One of our teams uses the Apollo client plug-in for VS Code, and I remember it comes with schema checking. Fixed.

 

I've worked with GraphQL quite a bit in both personal and professional settings. In my experience working with several teams in both new and existing GraphQL APIs, the biggest challenge has been schema design and management.

Maintaining a large schema can be really hard, especially when working across multiple teams with varying levels of experience with GraphQL. With REST, each endpoint is mostly isolated, which makes it easy to either make changes to or move endpoints to a new api version. In GraphQL, the schema is supposed to stay version-less by evolving over time. This means you need to think carefully about what you include in your GraphQL types, and how you structure your type relationships. If you create complex relationships that your clients begin to rely on, it can really hurt you later on.

The advice typically is to make small incremental changes to your schema as needed, but for many teams inexperienced with GraphQL, they don't take the time to focus on really understanding how their data graph should be structured.

Not to say that it isn't easy to mess up the design of a REST API, but I think in a lot of cases it is harder to fix a poorly designed GraphQL API due to the lack of versioning.
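To illustrate the version-less evolution mentioned above, the spec's own tool for this is the `@deprecated` directive: mark the old field and add its replacement alongside it, instead of cutting a /v2 (field names here are made up):

```graphql
# Evolve the schema in place rather than versioning the whole API.
type User {
  name: String @deprecated(reason: "Split into firstName/lastName.")
  firstName: String
  lastName: String
}
```

Tooling can then surface the deprecation to clients, and the old field can be removed once usage drops to zero.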

 

Yes, whereas with REST you risk having small differences between different endpoints causing errors. We work API-first with REST, but there are at least 3 ways of doing pagination, and that's with about 50 developers.

 

Absolutely true, although I would mention that if you are running a Federated schema for multiple services managed by multiple teams, you can do the same thing in GraphQL unless you enforce a strict pagination standard like Relay.

BTW, just wanted to say thanks for your great talk on using Kafka to back subscriptions at Summit. We've been looking into doing something similar to back our first subscription service so it was great to hear some of your insights :).

 
 

Any tech is only preferred / the best if it meets a requirement. For our company and our infrastructure, it makes absolutely no sense. We accept requests, post them, and push back low-volume data structures. There is no need for any of the use cases I see being used to flog GraphQL.

 
 

I would say, as you mentioned, just the knowledge and learning, which is no worse than with any new tooling.

The one drawback I have is the query syntax itself on consumption: getting to grips with the query/mutation/fragment syntax is a bit of a steep learning curve, but easily surmounted. :)

Where REST does win is often with simple GET calls for micro data, where mapping the fields into a schema takes double the time. In these cases I have abstracted the calls into a simple GET endpoint alongside my app build to save the bloat on the frontend.

Very little other than that I can think of as far as cons.

 

Another thing to keep in mind in addition to everything that's been mentioned already is that instrumentation requires a little additional effort. You normally get response time metrics almost for free with the majority of frameworks but you're hardly gonna get resolver-level granularity instrumentation with GraphQL unless you build it yourself.
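A hand-rolled sketch of what that resolver-level instrumentation might look like (real setups would use server plugins or tracing middleware; the metrics sink here is just a plain object for illustration):

```javascript
// Wrap each resolver so its execution time is recorded per resolver name.
const metrics = {};

function instrument(name, resolver) {
  return function (...args) {
    const start = process.hrtime.bigint();
    const result = resolver(...args);
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    (metrics[name] = metrics[name] || []).push(elapsedMs);
    return result;
  };
}

// Wrapping a hypothetical resolver:
const userResolver = instrument('Query.user', (parent, args) => ({ id: args.id }));
```

Note this times synchronous resolvers; an async resolver would need the wrapper to await the result before recording the elapsed time.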

 

One thing that hasn't been called out yet is that the spec itself is pretty open. In practice it's not a big problem, because most clients and servers take inspiration from how Apollo did it. For example, some servers and some clients allow you to do queries and mutations over a websocket, but some don't.

 

I love it as well. A few downsides: consumers have to learn a new way of requesting data. Over-selection is something you have to handle. Error handling is strange and everyone does it differently. I think the advantages outweigh these issues, but not everyone agrees.

 
 

You can add caching by hashing POST data or by adding a new header like here: developer.akamai.com/blog/2018/10/... The only real con can be over-engineering the project.

 
Kumar Abhirup: I am a 16-year-old JavaScript React developer from India who keeps learning a new thing every single day. 😍 Twitter: @kumar_abhirup Website: kumar.now.sh

dev.to now has dark mode.

Go to the "misc" section of your settings and select night theme ❤️