
Aizaz Khaja

If you're building for 'scale', what would your approach look like?

Recently, I was overlooked for an opportunity because they wanted someone with production experience building something at scale for a user base of 100k or more. While that's good feedback to start with, I had trouble visualizing 'scale'. I mean, I currently work for a company with a user base of approx. 300k, but I didn't build the frontend/backend that got it this far.

What does building for scale even mean? What does it look like on the front-end vs back-end?

When I think of scale, I'm thinking DevOps: ensuring your servers can handle 'scale', setting up distributed databases, and using caching (e.g. Redis).

So if tomorrow you were to start up a project, full stack work, how would you ensure it's scalable so that it can handle a mass influx of users the next day (extreme example much)?

Top comments (5)

rhymes • Edited

> So if tomorrow you were to start up a project, full stack work, how would you ensure it's scalable so that it can handle a mass influx of users the next day (extreme example much)?

Hi Aizaz, it's a really good question, but one without a simple answer. Scaling doesn't mean the same thing for everyone, for every project, or for every company. Scaling also depends on time and money, time to market, and developer productivity, so there's no silver bullet for it.

I'm going to use a recent tweet to help me here:

Sam is saying a thing that I've learned through the years. Scaling is not an abstract concept, meaning that it can't be detached from the problem at hand (as there is no single way to achieve "scale"). An extreme version of his example could be: "I'm choosing technologies A, B and C because I've heard they scale, but I have no expertise in them, so I spend all my time working against them, neglecting the product and possibly missing the time-to-market window". Maybe in this example the person could have used X, shipped a slower product, made some money out of it, and bought some server breathing room in the meantime :)

So, what does scale mean? Technically it means the ability of a system to handle a growing amount of work. But how that is going to be achieved varies wildly, depending on the problem at hand. There are general techniques: to scale algorithms, to scale IO-bound operations, to scale CPU-bound operations, to scale databases, to scale networking, to scale application servers and so on :D

Another big factor to keep in mind is that scaling and measuring usually go hand in hand. You have a baseline, you figure out roughly how many users your app can handle with the current architecture, and then you start from there.

It also depends on the traffic patterns and which type of app you are creating. If you have to set up a government website for a new initiative that benefits 10% of your population, and you know that on opening day millions and millions of users are going to register... well, you need to be prepared for that. How? Measuring, load testing, simulating real traffic, deploying all the tricks in the book and so on.
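To make "measuring" concrete, here's a minimal baseline-latency sketch in TypeScript (Node 18+, where `fetch` is global). The target URL and request counts are placeholders, and for real work you'd reach for a dedicated load-testing tool (k6, wrk, JMeter); this only illustrates measuring before scaling:

```typescript
// baseline.ts — a rough latency-baseline sketch (hypothetical target URL).
// Sends requests in fixed-size batches and reports latency percentiles.

const TARGET = "https://staging.example.com/"; // placeholder: your own staging URL
const TOTAL_REQUESTS = 500;
const CONCURRENCY = 25;

async function timedRequest(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the timing includes the transfer
  return performance.now() - start;
}

async function main(): Promise<void> {
  const latencies: number[] = [];
  for (let sent = 0; sent < TOTAL_REQUESTS; sent += CONCURRENCY) {
    const batch = Array.from({ length: CONCURRENCY }, () => timedRequest(TARGET));
    latencies.push(...(await Promise.all(batch))); // one batch at a time
  }
  latencies.sort((a, b) => a - b);
  const p = (q: number) => latencies[Math.floor(q * (latencies.length - 1))];
  console.log(`p50=${p(0.5).toFixed(0)}ms p95=${p(0.95).toFixed(0)}ms p99=${p(0.99).toFixed(0)}ms`);
}

main().catch(console.error);
```

Run it against a baseline deployment, change one thing (more instances, a cache, a DB index), and run it again: that difference is your scaling data.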

> What does it look like on the front-end vs back-end?

What do you mean here? The frontend runs on the user's computer, so there's not much to scale (optimize, yes, but scale I'm not sure). On the backend we go back to the infinite combinations of scaling possibilities :D It might mean tuning the DB, it might mean putting as much content as possible on a CDN, it might mean having a cluster of cache servers, it might mean upgrading a dependency that leaks memory, and much more.

> When I think of scale, I'm thinking DevOps: ensuring your servers can handle 'scale', setting up distributed databases, and using caching (e.g. Redis).

I think "devops scaling" as a part of the whole scaling landscape. You might not even have servers in your care and run on a serverless platform. Distributed databases come with considerations about complexity, data integrity, consistency in reading and so on. So does caching, the running joke is that cache invalidation is hard.

> So if tomorrow you were to start up a project, full stack work, how would you ensure it's scalable so that it can handle a mass influx of users the next day (extreme example much)?

It's not a perfect science, but it's not a guessing game either: you can't build a project knowing it's going to be hammered the day it goes public without having prepared :D How do you get there? By building the MVP, measuring it, testing it thoroughly with different traffic patterns, overloading it, understanding the cost of acquisition of a new user, and understanding what happens if you go from 1,000 to 10,000 to 300,000 users.

A few general tips:

  • test (the code, the system and so on); bugs can be a limiting factor in scaling
  • memory leaks can be a big issue, but they might not be for a while (going back to the concept highlighted by the tweet: if you leak 10 MB of memory per day and you can cope with restarting the servers once in a while, the money you save can be put towards buying more RAM while you focus on a key feature; the debugging can wait)
  • cache expensive computations if you can, cache HTML as well if you can
  • do as much as possible out of band (the user shouldn't be stuck for seconds waiting for an operation to finish, which in turn means that your system will crawl if all users are doing the same operation at roughly the same moment; see the queue sketch after this list)
  • memory is faster than disk (so if you can put stuff in memory instead of putting it on the disk, it's better)
  • use load balancers (but most PaaS already have them)
  • start with a queue to distribute the workload; simple architectures in the beginning are better than complex ones
  • use HTTP in all of its capacity (conditional caching, proper status codes, and so on; see the ETag sketch after this list)
  • prepare for the worst (what happens if something goes down?)
  • understand the cost of adding one more instance: every new server brings with it new connections to your data storage for example
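Two of the tips above lend themselves to short sketches. First, doing work out of band with a queue: a minimal producer/worker pair using BullMQ (one queue library among many; the queue name, job payload, and Redis location are all hypothetical):

```typescript
// queue.ts — pushing slow work out of the request path with BullMQ.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // Redis backs the queue

// Producer side: a web request only enqueues a job and returns immediately,
// so the user isn't stuck waiting for the slow operation to finish.
const emails = new Queue("emails", { connection });

export async function handleSignup(address: string): Promise<void> {
  await emails.add("welcome", { to: address });
  // respond to the user right away; a worker sends the email later
}

// Consumer side: a separate process drains the queue at its own pace,
// and you can add more workers if the backlog grows.
new Worker(
  "emails",
  async (job) => {
    console.log(`sending welcome email to ${job.data.to}`); // placeholder send
  },
  { connection },
);
```

Second, "use HTTP in all of its capacity": a sketch of conditional caching with ETags using plain `node:http` (the payload and hashing scheme are just for illustration). The client re-sends the tag it last saw, and when nothing has changed the server answers with an empty 304 instead of the full body:

```typescript
// etag.ts — conditional HTTP caching with ETags (illustrative payload).
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const server = createServer((req, res) => {
  const body = JSON.stringify({ greeting: "hello" }); // stand-in response
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304); // client's cached copy is current: no body sent
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json", ETag: etag });
  res.end(body);
});

server.listen(3000);
```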

You'll notice I didn't talk about specific technologies or languages or frameworks, because I think they ultimately don't matter that much (unless you have specific requirements that can be fulfilled by this or that tool).

Also, remember to build your product in the meantime, delegate as much as you can to the right tools. Don't reinvent the wheel until you know you need to.

Dian Fay • Edited

So much of building high-capacity and high-workload systems is about the code you don't write ;)

Daniel McMahon

An interesting question, Aizaz: there is a lot of bias and implied meaning for every individual when you consider scalability. Here are a few things that I would consider noteworthy when it comes to approaching the 'scalability' topic:

  • Design Patterns -> how you handle your front- and back-end setup: can you scale out your servers or upscale your load balancing to handle high traffic spikes? This becomes more complex depending on your application/website -> you can imagine a statically generated website would take minimal upscaling, however when you have a website that is powered by 5/6/7+ APIs/microservices they may all have to scale in parallel. How on earth can you manage that if you just have a single deployment of your application and APIs (state management is of particular note)? Caching, as you mention, is a big consideration here -> having fast in-memory storage for your application to pull from, instead of the main DB which may hit I/O blocks once your application is scaled, is worth considering. Also be smart about how you scale and what you scale -> you may be able to take advantage of an elastic file storage system for your deployed containers to use, instead of deploying multiple containers that each contain a few gigs' worth of images. It's all about the optimization patterns you can consider in advance.

  • Technologies -> Docker & Kubernetes are the big go-tos at the moment for the DevOps-y side of scalability. You can easily have a service running on a single container in Docker, but tooling like Kubernetes allows you to react to traffic changes in near real time and ensure you have additional resources/pods allocated as the need arises. This can address issues like scaling out your microservices in sync with your main application's requirements (if they're all running on Kubernetes). When it comes to DevOps you also need to consider your monitoring & alerting: how can you keep an eye on all the things simultaneously? And how can you define reasonable SLOs/SLAs for your services that match your customers' use cases? There are interesting open source monitoring tools in this area; it's worth looking at OpenTracing for large-scale insight.

  • Cost/Speed -> being clever about how you set up your application to scale is vital. Consider parallelism: if your application needs to do some modelling (and is not strictly just a web application), how are you going to train your models in a cost-efficient way? Usually companies go for an EMR cluster with Apache Spark for handling distributed data, as opposed to a single EC2 instance or a local machine doing all the training. Parallelisation is vital to speed. This is also evident in the development and setup of services like web crawlers -> sure, you can have data scraped from a website, but how can you parallelise it in a way that lets you manage the state if, say, the DB crashes halfway through a crawl? Interesting solutions to these things might lie in the adoption of Actor frameworks like Akka.

  • Language Restrictions -> languages are tools and everyone has a bias towards certain languages. However, when it comes to scalability you need to consider things like the fact that Node is single-threaded (with an event-driven loop) versus the native concurrency options afforded by JVM frameworks like Akka (see the sketch after this list). Or even in terms of programming for scale, some argue that functional programming paradigms are more powerful and efficient (fewer side effects) than object-oriented ones (I won't debate either/or, but as an example you can see how Java has spread out into Scala/Kotlin etc.). Another consideration is the level of abstraction of your language: if you want high speed and scalability you may need to use a lower-level language, e.g. Go/Rust, as opposed to the JVM. You may see interesting moves in web dev in this area when it comes to WebAssembly. A step beyond this may be using your own custom-built packages instead of off-the-shelf ones, as they may not be built to support concurrency or parallelisation.
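To make the Node point concrete, here's a small sketch of moving CPU-bound work off the event loop with `node:worker_threads` (the Fibonacci function is just a stand-in for any heavy computation; assumes a CommonJS build where `__filename` is available):

```typescript
// cpu-offload.ts — Node's event loop is single-threaded, so CPU-bound work
// blocks every other request; worker_threads moves it off the main thread.
import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

// Deliberately slow recursive Fibonacci: a stand-in for real CPU-bound work.
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

if (isMainThread) {
  // Main thread: spawn a worker and stay free to serve other events.
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on("message", (result) => console.log(`fib(40) = ${result}`));
  console.log("event loop is still responsive while the worker computes");
} else {
  // Worker thread: do the blocking computation and report back.
  parentPort!.postMessage(fib(workerData as number));
}
```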

Anyway, that's just some immediate food for thought -> I don't believe there is a one-approach-fits-all solution, but the above are a few things to consider for scale when starting out! I know a few devs use the C4 model for the initial setup of applications; it's a worthwhile tool to check out if you're considering the larger scale of your applications from the beginning of development: c4model.com/

rhymes

Aizaz, you're in luck! Vaidehi Joshi published an article in her "intro to distributed systems" series... about scalability :D

Miklos Bertalan

The 12 Factor App manifesto is a very good, tech-stack-independent read about this. I try to stick to it regardless of the size of the system I am working on.