Just a quick one I'd like to talk about: rate limiting goroutines.
This is about controlling the number of concurrently executing tasks.
Sometimes we have to process a stream of long-running tasks without knowing ahead of time how many of them will come out of the task channel. The main concern here is not to fire a goroutine for every task the moment it is ingested: spawning an uncontrolled number of goroutines concurrently can lead to unpredictable behavior or exhaust memory.
A `limiter` (AKA semaphore), a buffered channel of empty structs, is added into the mix, capped at `count`, the number of tasks allowed to run concurrently.
- For the first `count` iterations, an empty struct is pushed onto the `limiter` channel and a goroutine is fired up to run the incoming task.
- At iteration `count + 1`, the `limiter` channel is full, so the `main` goroutine blocks.
- Once any of the currently running tasks finishes its execution, it reads out of the `limiter` channel to make room for another task to run. This unblocks the `main` goroutine.
- After the `main` goroutine takes control back, it pushes an empty struct onto the `limiter` channel and starts the cycle over by running a new goroutine for the incoming task.
And so on, until the timeout is reached; then the for loop breaks and no more tasks are run.
To sum up, this is how we can stop goroutines from all coming up at once: by controlling how many of them can be up and running concurrently with a `limiter` buffered channel, we avoid memory overflow and unpredictable behavior.