Whether it’s serving API requests or processing background jobs, concurrency is a necessity in most software systems.
Some programming languages (or runtimes) can leverage more than one concurrency model, while others can’t; most have one model that fits best, the one they are primed for. Concurrency entails complexity, and as software engineers it falls to us to decide where that complexity lives.
So, depending on the programming language (or runtime) and the concurrency model it is primed for, that complexity chiefly resides either in the code or in the infrastructure. The code is where we have the most control and flexibility: function calls are fast and we have a menu of patterns at our disposal.
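To make “complexity living in the code” concrete, here is a minimal sketch of one such pattern, fan-out/fan-in over a pool of workers. Go is used purely as an illustration of a runtime with lightweight native concurrency (the post doesn’t prescribe a language), and the worker count and workload are made up for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut distributes work across nWorkers goroutines and collects the results.
// All of the coordination (spawning, distributing, joining) lives right here
// in the application code, not in an external scheduler or orchestration layer.
func fanOut(inputs []int, nWorkers int, work func(int) int) []int {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- work(j)
			}
		}()
	}

	// Feed the jobs, then close the results channel once all workers finish.
	go func() {
		for _, in := range inputs {
			jobs <- in
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	out := make([]int, 0, len(inputs))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	squares := fanOut([]int{1, 2, 3, 4, 5}, 3, func(n int) int { return n * n })
	fmt.Println(squares) // order is not guaranteed, e.g. [4 1 9 25 16]
}
```

The point is that the whole coordination story fits in a handful of lines inside the application itself.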
But that’s not all.
Our applications need to be effective for the use cases they fulfill and cost-efficient while processing them, because we never know when economic or other external constraints will appear.
We can’t plan for every future eventuality (unlike Dr. Strange). We shouldn’t use that as an excuse to “just spin up more servers” (and burn more money), or to not do our best. Doing our best means making wise choices and picking the right tool for the job, the one most resistant to needing a future rewrite, replacement or Frankenstein-ing, among other things.
If your choice of programming language + concurrency model is strongly bound to the number of CPU cores, or can only really leverage horizontal scaling, you’ve limited your options, probably without even telling your manager.
Thus, handling tasks that are genuinely (or even just somewhat) CPU-intensive and that require parallelism at high scale will likely not go well without additional “solutioning”; one such task is sketched right after the list below.
Some CPU-Intensive Tasks
- RegEx
- Map or reduce operations
- Cryptographic operations
- De/serialization
- Image comparison
- Document generation
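As a hypothetical illustration of one item from that list (cryptographic hashing, sketched again in Go with made-up chunk sizes): CPU-bound work like this only gets faster with more cores if the runtime can actually schedule it across them. On a runtime pinned to a single core, the very same logic serializes.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"runtime"
	"sync"
)

// hashChunks computes a SHA-256 digest for each chunk, running at most one
// worker per available CPU core. On a runtime that can only use one core,
// the same code would effectively run sequentially.
func hashChunks(chunks [][]byte) [][32]byte {
	digests := make([][32]byte, len(chunks))
	sem := make(chan struct{}, runtime.NumCPU()) // cap parallelism at core count

	var wg sync.WaitGroup
	for i, c := range chunks {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(i int, c []byte) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			digests[i] = sha256.Sum256(c)
		}(i, c)
	}
	wg.Wait()
	return digests
}

func main() {
	// Hypothetical workload: 8 chunks of 1 MiB each.
	chunks := make([][]byte, 8)
	for i := range chunks {
		chunks[i] = make([]byte, 1<<20)
	}
	for i, d := range hashChunks(chunks) {
		fmt.Printf("chunk %d: %x...\n", i, d[:4])
	}
}
```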
Congratulations!
Now “solutions” must be added to the infrastructure to support the shift in design: cluster/node management, orchestration, perhaps a scheduler, more complex monitoring and traceability, among others. It also opens the door to requiring extreme horizontal scaling, all instead of choosing something with native concurrency. Worse yet, it becomes extremely easy to couple the application architecture to a particular deployment model or ecosystem (that is, to the infrastructure).
So, my fellow software engineers: choose a programming language with native concurrency, one that can handle “the gamut” of tasks (CPU-intensive, I/O-bound, multiple concurrent jobs) with good efficacy and at low cost.
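As one last hedged sketch of what “the gamut” can look like with native concurrency (Go again, with a placeholder endpoint and an in-memory log standing in for real work): an I/O-bound wait and a CPU-bound scan progress side by side in a single process, with no extra infrastructure needed to coordinate them.

```go
package main

import (
	"fmt"
	"net/http"
	"regexp"
	"strings"
)

func main() {
	// I/O-bound task: an HTTP call that mostly waits on the network.
	// (example.com is a placeholder endpoint.)
	ioDone := make(chan string)
	go func() {
		resp, err := http.Get("https://example.com")
		if err != nil {
			ioDone <- "request failed: " + err.Error()
			return
		}
		defer resp.Body.Close()
		ioDone <- "fetched " + resp.Status
	}()

	// CPU-bound task: scanning a large in-memory log with a regular expression.
	cpuDone := make(chan int)
	go func() {
		re := regexp.MustCompile(`error=\d+`)
		log := []byte(strings.Repeat("ok=1 error=42 ok=2\n", 200_000))
		cpuDone <- len(re.FindAll(log, -1))
	}()

	// Both kinds of work progress concurrently in one process;
	// results are consumed in whichever order they finish.
	for i := 0; i < 2; i++ {
		select {
		case msg := <-ioDone:
			fmt.Println("I/O task:", msg)
		case n := <-cpuDone:
			fmt.Println("CPU task: matches found:", n)
		}
	}
}
```

Both tasks share the same scheduling primitives, which is exactly the property being argued for here.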