

Optimising Python workloads for Kubernetes

Author: Jothimani Radhakrishnan (Lifion by ADP). Software Product Engineer, cloud enthusiast, blogger, DevOps, SRE, Python developer. I usually automate my day-to-day work and blog about the challenging problems I run into.

Intro

This post is about how I shrank the processing time of a loop in Python from 30 seconds to 5 seconds (a 6x speed-up).

I had a use case where a Python script calls the Jira API, scrapes data, does some calculations, and builds a tuple for each ticket. On average each iteration took about 1 second, so 30 tickets meant 30 loops and roughly 30 seconds.
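For context, here is a minimal sketch of that sequential version. The endpoint, ticket list, and helper names (`JIRA_URL`, `fetch_ticket`, `process_ticket`) are hypothetical, with the `requests` library standing in for the real Jira calls:

```python
import requests

JIRA_URL = "https://jira.example.com/rest/api/2/issue"  # hypothetical endpoint
TICKET_IDS = [f"PROJ-{i}" for i in range(1, 31)]        # ~30 tickets


def fetch_ticket(ticket_id):
    # One network round trip per ticket -- roughly 1 second each.
    response = requests.get(f"{JIRA_URL}/{ticket_id}", timeout=10)
    response.raise_for_status()
    return response.json()


def process_ticket(ticket_id):
    # Scrape the fields we care about and build a tuple.
    data = fetch_ticket(ticket_id)
    fields = data.get("fields", {})
    return (ticket_id, fields.get("status", {}).get("name"), fields.get("summary"))


# Sequential loop: ~1 s per ticket, so ~30 s for 30 tickets.
results = [process_ticket(tid) for tid in TICKET_IDS]
```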

To make the script faster, I had to understand concurrency in Python; exploring that crucial concept is what sparked the idea for this post. :)

A quick tip: use threading if your program is I/O-bound (e.g. waiting on the network), and multiprocessing if it is CPU-bound.

Let's briefly catch up on multi-processing vs multi-threading vs asyncio.

Multi-processing


  • Suited to heavy computation: a multi-processing Python program can fully utilize all available CPU cores and native threads (see the sketch after this list).
  • Each Python process is independent of the others, and they don't share memory.
  • Collaborative tasks across processes rely on the inter-process communication APIs provided by the operating system.
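As an illustration (not from the original use case), a minimal CPU-bound sketch using the standard-library `multiprocessing.Pool`:

```python
from multiprocessing import Pool, cpu_count


def crunch(n):
    # A deliberately CPU-bound function: sum of squares up to n.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    # One worker process per CPU core; each gets its own interpreter and memory.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(crunch, [10_000_000] * cpu_count())
    print(results)
```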

Multi-threading


  • All threads share the same memory, i.e. the program runs as a single process with multiple threads, which is a good fit for I/O-bound work (see the sketch after this list).
  • Caveat: pre-emption – the scheduler can suspend or reschedule any running thread at any time.
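A minimal sketch of threads sharing memory inside one process, using the standard-library `threading` module with a simulated 1-second I/O wait:

```python
import threading
import time

results = []                 # shared memory: every thread appends to the same list
lock = threading.Lock()      # guard the shared list against concurrent writes


def download(item):
    time.sleep(1)            # simulate a 1-second network call
    with lock:
        results.append(f"{item}-done")


threads = [threading.Thread(target=download, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # 5 items finish in ~1 s instead of ~5 s sequentially
```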

Python Asyncio

  • Single process and single thread; it makes better use of a CPU that would otherwise sit idle waiting for I/O.
  • asyncio runs an event loop that routinely checks the progress of tasks; whenever a task is waiting on I/O, the loop schedules another task for execution, which minimises the time spent waiting (see the sketch after this list).
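A minimal single-threaded sketch of the event loop overlapping I/O waits, using `asyncio.gather` and `asyncio.sleep` as a stand-in for a real async HTTP call:

```python
import asyncio


async def fetch(item):
    # await hands control back to the event loop while the "I/O" is in flight.
    await asyncio.sleep(1)
    return f"{item}-done"


async def main():
    # All 5 waits overlap, so the batch finishes in ~1 s, not ~5 s.
    return await asyncio.gather(*(fetch(i) for i in range(5)))


print(asyncio.run(main()))
```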

Based on the use case, I went with multi-threading, since each call is I/O-bound, quick, and ephemeral (killing and restarting a thread does not cause problems for the function).
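Here is a minimal sketch of how the loop could be threaded with `concurrent.futures.ThreadPoolExecutor`. It reuses the hypothetical `process_ticket` and `TICKET_IDS` from the sequential sketch above, and the 5 workers mirror the pod sizing discussed below:

```python
from concurrent.futures import ThreadPoolExecutor

# process_ticket and TICKET_IDS are the hypothetical helpers from the
# sequential sketch earlier in this post; only the loop changes.
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(process_ticket, TICKET_IDS))

# With 5 threads overlapping the ~1 s network waits, 30 tickets take
# roughly 30 / 5 ≈ 6 s instead of ~30 s.
```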

Running it in Kubernetes

Okay! What needs to be considered when running this in Kubernetes?

  • Requests – the soft limit of a resource: the amount the scheduler reserves for the container.
  • Limits – the hard stop for a resource (CPU / RAM). Past this point the container is CPU-throttled or, for memory, OOM-killed.

However, if we don't specify any requests or limits, Kubernetes assigns resources dynamically and the container can use whatever is free on its node; a minimal sketch of setting both follows below.
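As an illustration only (the post does not include a manifest), here is a hedged sketch of setting requests and limits with the official `kubernetes` Python client; the pod name, container name, image, and resource values are assumptions:

```python
from kubernetes import client

# Hypothetical container running the Jira script, with a soft request and a
# hard limit on CPU and memory.
resources = client.V1ResourceRequirements(
    requests={"cpu": "500m", "memory": "256Mi"},   # soft: what the scheduler reserves
    limits={"cpu": "1", "memory": "512Mi"},        # hard: throttled / OOM-killed beyond this
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="jira-scraper"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="scraper",
                image="python:3.11-slim",           # hypothetical image
                resources=resources,
            )
        ],
        restart_policy="Never",
    ),
)

# client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```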

Coming back to my use case

I provisioned a pod with 1 CPU and ran the script with 5 threads; the thread count needs to be calculated and initialized based on the nature of your function. (We can cover this thread-allocation process in a separate post.)

Resource usage is controlled using requests and limits, and Kubernetes enforces them automatically.

From the Kubernetes docs:
If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resources than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.

Happy processing!

Join us

Register for Kubernetes Community Days Chennai 2022 at kcdchennai.in
