I want to start this discussion because of this:
gRPC-Web is going GA
What do you think about gRPC for the frontend?
Top comments (10)
What's the debate? It just completes the Web arsenal; you can choose whatever is best suited for a specific communication channel.
Or a mix of them. Considering that gRPC is built on HTTP/2, it should feel natural in the front-end ecosystem (a minimal example below).
Each method has its own pros/cons and a scenario where it is the best fit.
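To make that concrete: a unary gRPC-Web call from the browser looks roughly like the sketch below. This assumes stubs generated by protoc-gen-grpc-web for the usual Greeter/HelloRequest demo service; the import paths, service name, and endpoint are placeholders, not anything from this thread.

```typescript
// Minimal gRPC-Web unary call (illustrative names; generated-stub paths vary).
import { GreeterClient } from './generated/GreeterServiceClientPb';
import { HelloRequest } from './generated/greeter_pb';

// The endpoint must be a gRPC-Web-capable proxy (e.g. Envoy) in front of the
// gRPC server; browsers cannot speak plain gRPC over HTTP/2 directly.
const client = new GreeterClient('http://localhost:8080');

const req = new HelloRequest();
req.setName('frontend');

client.sayHello(req, {}, (err, resp) => {
  if (err) {
    console.error('gRPC-Web error', err.code, err.message);
    return;
  }
  console.log(resp.getMessage());
});
```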
Long polling still has pretty nice usage semantics, IMO. It uses a short-lived request/reply connection style, so both sides are already prepared to clean up on failure. Tactics that depend on keeping an open connection can get tricky in failure scenarios. Plus, with long polling you get true pull-based semantics: if you suddenly receive a spike of data and get overwhelmed, you can stop sending new polls until you catch up (rough sketch of the loop below). There is no need to disconnect, finish processing, reconnect, then deal with failures to reconnect, and so on. On long-poll failures you can just keep retrying and not worry too much about connection maintenance.
It is too bad that the technical implementation of long polling is so inefficient. I do not know of a newer equivalent with these semantics; tell me if you know of one. I found a few links discussing hacking long polling on top of gRPC, but nothing concrete or official, and no word on its resource efficiency.
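Roughly the kind of loop I mean, as a sketch rather than any particular library's API; the endpoint, wait parameter, cursor, and response shape are all made up for illustration:

```typescript
// Hypothetical long-poll loop: the client controls the pace, so a spike of
// data simply delays the next poll instead of forcing a disconnect.
async function pollLoop(process: (events: unknown[]) => Promise<void>): Promise<void> {
  let cursor = '0';
  while (true) {
    try {
      // The server holds the request open until data arrives or ~30s pass.
      const res = await fetch(`/events?wait=30&cursor=${encodeURIComponent(cursor)}`);
      if (!res.ok) throw new Error(`poll failed: ${res.status}`);
      const { events, nextCursor } = await res.json();
      await process(events); // backpressure: no new poll until this batch is done
      cursor = nextCursor;   // resume from where we left off after any gap
    } catch (err) {
      // Failures show up here as exceptions, not connection-state events:
      // nothing to clean up, just back off and retry.
      await new Promise((resolve) => setTimeout(resolve, 2000));
    }
  }
}
```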
Long polling was a hack because WebSockets were not ready. I do not see a reason why you would not keep a connection open the whole time and implement your own pull or push logic on top of it; it is more efficient from every point of view.
If the connection drops, that affects any protocol, so it does not count against one in particular.
Keep-alive already does that behind the scenes (reusing connections), and I've also read about streaming with gRPC.
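On the gRPC streaming point: a server-streaming call through gRPC-Web does give you that kind of long-lived push channel. A sketch only, assuming protoc-gen-grpc-web generated stubs with a hypothetical Events service and subscribe method:

```typescript
// Server-streaming over gRPC-Web: one long-lived request, messages pushed as
// they arrive. Service and method names here are made up for illustration.
import { EventsClient } from './generated/EventsServiceClientPb';
import { SubscribeRequest } from './generated/events_pb';

const client = new EventsClient('http://localhost:8080'); // gRPC-Web proxy

const stream = client.subscribe(new SubscribeRequest(), {});

stream.on('data', (msg) => console.log('update', msg.toObject()));
stream.on('error', (err) => console.error('stream error', err.code, err.message));
stream.on('end', () => console.log('server closed the stream'));

// The client can drop the stream at any time, e.g. if it falls behind:
// stream.cancel();
```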
When I did the logic for keeping the connection open, I also had to have a queue in case of activity spikes. Then I had to monitor whether that queue got too full. Then I had to take action on the connection when it did. Then I also had to handle connection failure notices. Which made the connection a shared, stateful resource between the two competing concerns. Which opens up another can of concurrency worms. E.g. The connection closed, but it was because the queue monitor closed it, so don't reconnect.
Whereas if I make a request/reply of a specific duration, most of the connection-handling code goes away. I don't have to disconnect for being overwhelmed, because the whole thing shuts itself down after a set duration anyway, and I can just wait until I'm done processing the spike before making the next request. Connection failures in this case tend to throw exceptions instead of being handled as events, so the remediation is easy: catch, back off, and retry.
I should also note that this was for processing event streams. So a) it is totally possible to receive more messages than the code can process and b) I can request the server to send me messages from where I left off. So nothing is lost from being disconnected for a time.
Certainly websockets are better for a lot of common scenarios. Long polling started as a hack because request/reply streams were not meant for server pushes. But the hack actually turns out to have discovered a nice model for pull-based streaming of live data. Now we just need a more efficient implementation.
OK, this may sound like a strange thing to say, but pulling can be done using any of the methods I mentioned, including WebSockets.
It can be done, but not simply. For example, consider sending a pull-based, time-boxed request for data (similar to a long poll) across a web socket. Since the web socket is bidirectional and async, the response is in no way connected to the request; it could arrive at any time (or never) and on another thread. So you have to set up a state machine to track the state of each request, plus timeouts to give up on a response. And there is potential for a lot of Byzantine problems, like out-of-order responses due to GC pauses, lost packets plus network buffers, and whatnot. Not to mention dealing separately with connection interruption events. A lot of these problems are already handled for you by a regular HTTP request. A good library on top of WebSockets could abstract this away, like HTTP libraries do for TCP. Let me know if you know of any good ones.
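For what it's worth, this is roughly the bookkeeping in question; a sketch with a made-up { id, payload } envelope and a placeholder URL, and it still leaves connection drops (onclose/onerror) to be handled separately:

```typescript
// Request/response correlation over a raw WebSocket: ids, pending-request
// tracking, and timeouts. The message envelope and URL are assumptions.
type Pending = {
  resolve: (value: unknown) => void;
  reject: (err: Error) => void;
  timer: ReturnType<typeof setTimeout>;
};

const pending = new Map<string, Pending>();
const ws = new WebSocket('wss://example.test/stream'); // placeholder URL

ws.onmessage = (ev) => {
  const { id, payload } = JSON.parse(ev.data);
  const entry = pending.get(id);
  if (!entry) return;        // late, duplicate, or unknown id: drop it
  clearTimeout(entry.timer);
  pending.delete(id);
  entry.resolve(payload);
};

function request(payload: unknown, timeoutMs = 30_000): Promise<unknown> {
  const id = crypto.randomUUID();
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(id);    // time-boxed: give up if no response arrives
      reject(new Error(`request ${id} timed out`));
    }, timeoutMs);
    pending.set(id, { resolve, reject, timer });
    ws.send(JSON.stringify({ id, payload }));
  });
}

// A connection drop (ws.onclose) would still need to reject every pending
// entry, which is exactly the extra shared state described above.
```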
Oh, I think I see what you are saying. Create a web socket connection and automatically close it after a certain period of time, then reconnect and go again when the client is done processing. It would be an interesting scenario to test. Standard HTTP requests by now have mitigations for short-lived connections, but I'm not sure if they apply to web sockets. Need to test to be sure. Thanks for bearing with me. :)
I think you have in mind a specific usage that is not so popular and has rather complex requirements.
You mentioned threads, requests, out-of-order responses, and streams; I can't think of a scenario involving all of these.
Usually you only need to refresh some non-real-time data by making requests, and you do not send a new request until you have received the response to the last one.
Or you have a stream of data that keeps coming; usually you do not process it on the front end, so you do not end up lagging behind, and there are no separate requests involved. Also, JS is single-threaded. There is no need for a request: you keep the connection open because you want the latest data, otherwise it would not require a websocket.
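For comparison, the two usual front-end patterns being described, sketched minimally (URLs and the render callback are placeholders):

```typescript
// (a) Plain refresh of non-real-time data: the next request is only sent
//     after the previous response has been handled.
async function refresh(render: (data: unknown) => void): Promise<void> {
  const res = await fetch('/api/dashboard'); // placeholder endpoint
  render(await res.json());
}

// (b) Push over an open WebSocket: hand each message to the UI as it
//     arrives; no per-message request is needed.
function subscribe(render: (data: unknown) => void): WebSocket {
  const ws = new WebSocket('wss://example.test/live'); // placeholder URL
  ws.onmessage = (ev) => render(JSON.parse(ev.data));
  return ws;
}
```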
Yeah, the experience where I wished I had a long-poll capability was on the back end, processing events. Maintaining an open connection requires a surprising amount of code to handle failure cases, whereas a long poll can give you realtime-stream-like behavior in normal cases and a nice protective fallback in spiky cases, without worrying too much about the underlying connection. It is probably abnormal for web-socket users to run into throughput-constrained situations where they need to protectively disconnect. It's not our normal usage pattern either, but it was a possibility I had to code for.
Not a universe I'm super familiar with, but seems like a logical step to keep up to date with how things are being done on the web.