DEV Community

Discussion on: Mixing synchronous and asynchronous requests for serious speed

Hugh Jeremy • Edited

Maybe you misunderstood what's being called, and from where? Draft Sport API is part of Draft Sport; it's not an external service. If Draft Sport API doesn't respond to a request from inside the Draft Sport network boundary, then it's sure as hell not going to respond to a browser, either. Either way your system is FUBAR, you have critical failures, and your user is hung. So why choose a failure mode that results in a slow page load when the failure isn't present?

DoS? Whether a request comes from within your own network or a browser does not change your DoS attack surface. Either you have a public API or you don't. If you don't, then you have a totally different architecture and this post is irrelevant.

As for local storage - I don't see the relevance. You are going to have to come up with a convoluted way to invalidate your local cache, for... a benefit I can't divine.

Demian Brecht • Edited

Absolutely could be, sure. The site was 500'ing when I tried to check it out (I realize it's in alpha :)). I took a quick gander through the GitHub repo and saw an external request being made through the nozomi Python package, so I just naively assumed that's what was being used (no idea whatsoever what nozomi is). It's also relatively late, so I may just not be grok'ing the situation entirely. My assumption was that you were calling out to an external service, so it sounds like I was wrong there.

So based on your reply, I'm now assuming that you have (or had) a front end service and a back end service within a single network boundary (for simplicity's sake). There aren't any calls being made to any service outside of your own. You /used/ to call the back end service asynchronously from the front end JS, and have moved to calling it inline from the front end service's server-executed code during page render.

Running with that, the DoS surface is still changed. Let's say you're using WSGI (one worker handles one request at a time) and the back end service slows down for whatever reason. Eventually (depending, of course, on the load you're dealing with), all of your available workers can become saturated by the slow responses (or timeouts) from the other service. So now the user is presented with an error page - whatever your web server sends back once it has too many requests queued.

OTOH, if you ensure that there isn't any tight coupling between the front end and back end services, you can still at least present the user with /something/. Perhaps there's only a portion of the back end service that's running slowly and they'll still have the ability to navigate through the rest of your site, create a support ticket or get in touch with you, or whatever else you may have.

Of course, you can mitigate the potential issues of synchronous requests during page render with request timeouts and such, but you'll still be dealing with degradation of the whole site, with the same potential outcome, rather than just a portion of it.
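To illustrate the timeout mitigation: here's a minimal sketch of an inline fetch during page render that caps how long a slow backend can pin a worker. The function name and use of urllib are my assumptions for illustration, not anything from Draft Sport's actual code; the `opener` parameter is injected purely so the behaviour can be exercised without a live backend.

```python
import json
import urllib.error
import urllib.request

def fetch_inline(url, timeout=0.5, opener=urllib.request.urlopen):
    """Fetch backend data synchronously during page render, but cap how
    long a slow backend can hold this WSGI worker. On timeout or error,
    return None so the template can render a fallback section instead
    of hanging the whole page."""
    try:
        with opener(url, timeout=timeout) as response:
            return json.load(response)
    except (urllib.error.URLError, TimeoutError, OSError):
        # Worker is freed quickly; only this section of the page degrades.
        return None
```

The point is that the timeout bounds how long any one worker is held, but under sustained backend slowness every render still eats up to `timeout` seconds of a worker's time, so the whole site degrades together.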

So yes, the DoS surface does change depending on the chosen architecture. I'm not only talking about an actual malicious DoS attack, I'm talking about you accidentally DoS'ing yourself.

As for local storage - The problem statement was those "infuriating websites that present animated grey boxes while they fetch their content asynchronously". Using the local storage approach, you could conceivably:

  • Show the user the grey boxes (or whatever) once
  • Load the data asynchronously
  • Store the latest data in local storage

Next visit:

  • Populate the area with the locally stored data (no grey boxes anymore)
  • Load the data asynchronously
  • Overwrite local storage

(Nothing convoluted there)
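The two visits above amount to a stale-while-revalidate cache. Here's the flow sketched in Python purely for illustration - in the browser the store would be `window.localStorage` and the fetch would be asynchronous; the function and parameter names are mine, not from the post:

```python
def render_with_cache(cache, key, fetch):
    """Stale-while-revalidate, roughly as described above: show whatever
    was cached on the last visit (no grey boxes), fetch fresh data, and
    overwrite the cache for next time. `cache` is any dict-like store;
    in a browser it would be localStorage."""
    stale = cache.get(key)   # None on the very first visit
    fresh = fetch()          # would be an async request in the browser
    cache[key] = fresh       # overwrite for the next visit
    return fresh if stale is None else stale
```

On the first visit the user sees the grey boxes and then fresh data; on every later visit they immediately see the previous visit's data, which is stale by exactly one refresh.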

Yes, the initial data will be stale, but it gets rid of those annoying grey boxes. Like I said though, I'm not really a UI guy, so it might be a bad idea. I guess it all depends on what's more important: not having the annoying grey boxes, or the freshness of the data without a visible refresh.

These are just my thoughts after having dealt with systems that have experienced these kinds of issues at high load. It may very well be that you never run into them. I was just trying to share my thoughts and experiences, not trying to be condescending or negative at all.

Hugh Jeremy • Edited

We'll have to agree to disagree, though I appreciate your interest in the post. The assumptions you are making are not applicable to the system in which these synchronous requests are being made. Though I am sure they are applicable in many systems.

To anyone else reading this comment thread and thinking "oh wow, I better make everything async!": Don't overthink it. Even if your system has the characteristics of the one Demian is describing, you might well want to take the 50% speed boost anyway. Make the risk/reward tradeoff.

Demian Brecht

Fair enough :) And no worries, it's a well written post and Python and performance are both topics that are near and dear to me.

And absolutely. Risk/reward trade-offs should always be considered. One of the main issues I've had with Python at scale is WSGI. With the release of ASGI servers and asyncio, many of those specific issues can be greatly mitigated, if not solved entirely. I just thought it would be helpful to address this particular issue, as I've been bitten by it in the past. Having to re-architect pieces of a system under such circumstances is not a fun thing ;)