DEV Community

Vaidehi Joshi

To Queue Or Not To Queue

When I first learned about background jobs and scheduling workers, I was only in my first six months of learning to code. At first, I didn't question how it all worked. I just tried to learn how it worked and to get it working.

But after a few months, I started wondering how all of it actually worked. At the same time, though, I was feeling so overwhelmed by all of the new things being introduced to me that I didn't even try to answer the question of how does it all work? that was lurking in the back of my mind. Instead, I focused on simply trying to get it to work in the first place.

Then, a year later, in a job interview of all places, someone asked me about jobs and scheduling and how I thought it all worked. My guess? Off the top of my head, I said that jobs were a bunch of objects in an array that you just iterated through, completed, and then removed when they were done. It turns out, I wasn't that far off from the truth.

My favorite thing about abstractions is that once you start finding a pattern of abstraction in one system, language, or codebase, it becomes so much easier to start recognizing those patterns elsewhere. And some patterns are used in so much of software that we don't really have to think about them, even though we use them multiple times in a day. One of those abstractions is used to implement jobs, workers, and even the simplest of network requests – the queue. And queues are all over the place!

How do you queue up with queues?

Queues and stacks are often taught together in computer science courses because they are really similar, even though their uses and implementations are starkly different. We already know quite a bit about how stacks work, and how we can only add elements to them and remove elements from them at one end.

This is where queues and stacks differ: they're structured and built up in totally different ways.

A queue is a linear abstract data type that can contain a long list of elements. But what's important to remember about them is how they can grow and shrink in size.

There are some terms we've probably heard before that actually are derived from the nomenclature of adding an element to or removing an element from a queue structure. The process of adding an element to a queue is known as enqueuing, while removing an element from a queue is known as dequeuing. The interesting thing about how queues work, however, is where the enqueuing and dequeuing occur.

If we think about queues in a deli, or an appetizing shop, things usually shake out like this: you take a number, and wait your turn to be called. The person who arrived first is the one who is served first. That's just how most lines tend to work, right?

The same goes for queue data structures: the first element at the front of the queue is the first one that'll be served – or, in most cases, processed, handled, or run, depending on what the element is.

Queues operate, for the most part, according to the first-in, first-out principle. The first element in the queue, or the element at the front of the queue, is always the first to be processed and removed from the queue.

If we end up forgetting everything else there is to know about queues and stacks, this is probably the most important difference to remember: stacks are last-in, first-out (LIFO) structures, while queues are first-in, first-out (FIFO) structures.
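To make that distinction concrete, here's a quick sketch (in Python, purely as an illustration) of the same three elements going into a stack and into a queue:

```python
from collections import deque

# A stack is LIFO: the last element pushed is the first popped.
stack = []
stack.append("a")   # push
stack.append("b")
stack.append("c")
print(stack.pop())  # "c" – the most recently added element comes out first

# A queue is FIFO: the first element enqueued is the first dequeued.
queue = deque()
queue.append("a")       # enqueue at the back
queue.append("b")
queue.append("c")
print(queue.popleft())  # "a" – the oldest element comes out first
```

Same elements, same order of insertion, opposite order of removal.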

Queues and linked lists: the best of friends

One reason that queues and stacks tend to always come up together as parallel topics is that we end up writing similar functions in order to construct both of these data types. For example, even though their functions are named differently, a lot of the functionality is the same:

  1. a stack's push function is similar to a queue's enqueue function
  2. a stack's pop function is similar to a queue's dequeue function
  3. size and isEmpty are really helpful functions to have around in general
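Put together, those operations might look something like this minimal Python sketch (the class and method names here are just illustrative, not a standard API):

```python
class Queue:
    """A minimal, illustrative queue backed by a plain list."""

    def __init__(self):
        self._items = []

    def enqueue(self, item):
        # Like a stack's push, but the element joins the back of the line.
        self._items.append(item)

    def dequeue(self):
        # Like a stack's pop, but the element leaves from the front of the line.
        return self._items.pop(0)

    def size(self):
        return len(self._items)

    def is_empty(self):
        return len(self._items) == 0
```

Note that `dequeue` removes from the opposite end that `enqueue` adds to – that single difference is what makes this FIFO rather than LIFO.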

Yet there's another reason why they come up together and seem to be so intertwined: both of them can be implemented using either an array or a linked list.

As we already know from last week's deep dive, implementing a stack as an array can result in a really messy stack overflow if the elements pushed onto the stack end up being more than the array's allocated size can handle. So when it comes to queues, what's the deal?

Well, it turns out that arrays can end up being even worse implementation tools for queues, depending on the situation and circumstance.

Arrays can be very powerful when we know the size of our data structure ahead of time. But there are many times (in particular with queues) when we don't really know how big our queue is going to get. So what happens when we have to enqueue an element? Well, if we don't know the queue size beforehand and we're using an array, it's probable that sooner rather than later we'll run out of allocated memory and space. So we'll need to allocate more space and memory, copy over the contents of our array, and then enqueue the new element onto the end of the queue.

But there's yet another added complexity with array-based queue implementations: we need to be able to enqueue at the back of the array and dequeue from the front. That isn't always terrible, since accessing the first or last element in an array doesn't take much time, but it's not as simple as with stacks, where the adding and removing of elements all happens at one end of the structure. With a plain array, dequeuing from the front means shifting every remaining element over by one slot, so the time it takes to dequeue grows as our queue grows.

However, with a linked list implementation of a queue, things get a whole lot simpler. We don't need to worry about the queue size ahead of time, since memory can be distributed and the linked list implementation of our queue can grow dynamically (just as long as we don't use all of our computer's memory). Enqueuing and dequeuing get to be a lot easier, because we can simply find an empty space in memory and add a node with a reference to its next neighbor. There's no need to copy and recreate our queue like we had to with an array implementation. And if we add pointer references to the beginning and end of our list, we don't need to traverse the entire structure to enqueue or dequeue an element, either!

In fact, once you remove the need to traverse the queue, the time complexity of the enqueue and dequeue functions becomes constant, or O(1); that is to say, no matter how big or small our queue gets, the amount of time it takes to add or remove an element remains the same.
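Here's one way that linked list version might be sketched in Python (again, the names are illustrative): because we keep references to both the head and the tail, neither operation ever has to walk the list.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # reference to this node's next neighbor


class LinkedQueue:
    def __init__(self):
        self.head = None  # front of the queue: where we dequeue
        self.tail = None  # back of the queue: where we enqueue

    def enqueue(self, value):
        node = Node(value)
        if self.tail is None:
            # Empty queue: the new node is both the front and the back.
            self.head = self.tail = node
        else:
            self.tail.next = node  # link the old tail to the new node...
            self.tail = node       # ...and move the tail reference: O(1)

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from an empty queue")
        value = self.head.value
        self.head = self.head.next  # move the head reference: O(1)
        if self.head is None:
            self.tail = None        # the queue is now empty
        return value
```

No copying, no traversal – just a couple of pointer updates per operation.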

Of course, it's totally possible and valid to implement both a stack and queue in either array or linked list form. But it's important to recognize the differences between these two implementations, and when one might be more useful to us versus the other.

Hidden lines, secret queues

Now that we know how queues work, what makes them different from stacks, and when and how we might want to implement them, let's answer the bigger question here: why does any of this matter?

Well, it actually matters a lot. The reason being: queues are everywhere. They're all around us! We just need to take a deeper look at how the things we use every single day actually work underneath all of their abstractions.

If you're a backend developer, there's a chance that you've had to schedule background jobs, or workers, or tasks that have to run on a separate thread, or as a different process. The method of running a job or spinning up a task that runs multiple workers in the background at any given time is all executed through a job scheduler, which runs a queue in a separate thread and executes jobs in a FIFO manner – the first job that's added to the queue is run and then removed, and this repeats until the queue is empty and all of the background jobs have been processed.
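In spirit, that worker loop can be sketched like this (a toy, with made-up names – real job schedulers add threads, retries, and persistence on top of the same idea):

```python
from collections import deque

jobs = deque()  # the job queue


def enqueue_job(job):
    # Schedule a job by adding it to the back of the queue.
    jobs.append(job)


def run_worker():
    # Process jobs in FIFO order until the queue is empty.
    results = []
    while jobs:
        job = jobs.popleft()   # the first job enqueued is the first run
        results.append(job())  # "run" the job, then it's gone from the queue
    return results
```

The jobs come out in exactly the order they went in, and the loop ends when the queue is drained.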

If you're a frontend developer, you've probably worked with jQuery, and maybe even had to create animations using its API. The jQuery API uses queues to allow a series of functions to be asynchronously executed on elements in the DOM, and builds up a queue of “steps” that transition from one CSS value to another in order to create a smooth animation. In fact, jQuery's API actually has a dequeue() function that you can call to move from one element in the queue to the next!

But it's not just in our lives as developers or in the worlds of our codebases that queues matter. They also exist in the context of our individual machines!

The software that runs your operating system has its own queues of processes that have to run at any given time. But generally, it's not as simple as running through a list of processes all at once; instead, your machine has to run different processes of different importance at any given time. Sometimes this is implemented as a multilevel priority queue, which divides processes up based on priority, and then executes the queues based on their level of importance.

For example, a system process queue would generally take precedence over a background process queue; so, the background process queue would have to wait its turn until the system process queue had been fully executed, before the next queue could begin to run.
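One hypothetical way to sketch that multilevel idea: keep one FIFO queue per priority level, and always serve the highest-priority non-empty queue first (the level names here are made up for illustration):

```python
from collections import deque

# Queues listed from highest to lowest priority.
levels = {
    "system": deque(),
    "interactive": deque(),
    "background": deque(),
}


def schedule(process, level):
    # Enqueue a process at its priority level.
    levels[level].append(process)


def next_process():
    # Scan the levels in priority order; the first non-empty queue wins,
    # so background work waits until every higher-priority queue is drained.
    for queue in levels.values():
        if queue:
            return queue.popleft()
    return None  # nothing left to run
```

Within each level the order is still plain FIFO; the priority only decides which queue gets drained first.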

But let's zoom out even further. What do queues look like on the macroscopic level? Well, it's not just our individual machines that run concurrent queues all of the time. Servers all around the world (running tons of different applications built all over the world!) also process things using – you guessed it – queues!

When multiple machines (think hundreds, or hundreds of thousands) request data from or send data to an application's server, the server can be inundated with a lot of requests. Have you ever wondered how it handles those requests? And how it processes all of them…without exploding?

Well, there's a method to the madness, and as it turns out, the method is a queue. Request queuing is the procedure of handling requests before they even enter an application to be processed.

For example, if a hundred people all try to access Amazon's website on Black Friday for a huge sale, a bunch of their requests will probably hit a single server that's the closest to all of them. That server will handle all of those incoming requests by enqueuing them, one by one, into an incoming request queue. One by one, those requests will enter the application and be processed. And when it comes time to respond back to those one hundred machines with Amazon's homepage, the system will enqueue responses to be sent out in the order that they were processed – in an outgoing request queue.

Pretty amazing, when you think about it, right? One tiny little queue structure is what dominates the web, makes all the software go, and what helps us queue up and keep the internet running as the (almost) well-oiled machine that it is today.

Resources

Do you love to queue as much as I do? Well, in that case, you might want to check out these terrific resources!

  1. Stacks and Queues, Professors Robert Sedgewick & Kevin Wayne
  2. Linear Structures: Queues, University of New Delhi
  3. Scheduling, Professor Eddie Kohler
  4. Queues, Professor John Morris
  5. Queue: Linked List Implementation, Professor Robert Pitts

Top comments (3)

Karolin Varner

"As our array grows, we’ll need to be able to access one end and the other, and the space time complexity will grow as a result" – I can't quite follow that statement.

I am picturing a ring buffer based implementation which should give constant complexity for any operation (random access, access/push/pop front/back) except for a push on either side if the buffer needs to be resized. (Although one can get that to be quite quick too if we double the buffer size every time we need to grow). Am I missing something here?

Anyways, thank you very much for the fine article :)

Best,
Karolin

Vaidehi Joshi

Hi Karolin,

I'm not completely sure what your question is, but let me try my best to clarify! :) What I was trying to get at with that paragraph was the idea that with an array-based queue implementation—that is to say, when we use an array data structure to store memory contiguously, rather than a linked list, which allows memory to grow dynamically—if we want to be able to access the beginning and end of the structure often, it's helpful to know the size of the array beforehand. The problem is that most of the time, as our queue grows, we don't know how big it will be. In other words, we don't know the size of the structure beforehand, so if we're appending elements to the end of it, there's a good chance we'll exceed the allocated memory and have to copy over the contents of the array into a new structure. Comparatively, a linked list queue implementation is generally much better because it can grow dynamically in size, so we don't need to copy over data from memory to accommodate more data being enqueued.

Anyways, this is what I was trying to communicate in that sentence, hopefully this helped! Thanks for reading—and for your kind words!

Cheers,
Vaidehi

Karolin Varner

EDIT: Actually, disregard what I was saying. Of course you were talking about a plain array, where you would have to move every element one to the left; I was jumping ahead in my mind. Disregard please, sorry about the buzz :/

Hmm. Not sure here. I think I understood what you were saying, except the bit about "time complexity" becoming large with an array-based implementation. IMO – if you are talking about algorithmic complexity, and I am not sure you are – it should have constant complexity for both enqueue() and dequeue() operations (as long as you don't need to grow the buffer).

The ring buffer I mentioned – which avoids a lot of extra copies – looks something like this:

struct {
 uint8_t *buffer;      // Our memory buffer address
 size_t buffer_length; // Size of our preallocated memory buffer
 size_t start;         // Index of the first element in our ring buffer
 size_t end;           // Index of the last element in our ring buffer
};

Take a ring buffer that is completely full:
|a|b|c|d|e|
 ^start  ^end

After dequeue() returns "a" – we just increase the start pointer by 1, yielding one empty slot.
| |b|c|d|e|
   ^     ^end
  start

After enqueue("f") – we just increase the end pointer by 1 and then take it modulo the length of our buffer (4+1)%5=0.

|f|b|c|d|e|
 ^ ^start
end

I suppose if you used a plain array – and not a ring buffer – you would in fact have to copy all the elements to the beginning. Is this what you were picturing?

Personally, I also do quite a bit of low-level programming where excellent performance is needed; linked lists are usually discouraged there because of the many syscalls to malloc() (or the equivalent on your platform) that are needed. So the overhead of the extra pointers needed in the linked list, the decreased cache locality (how well the CPU can cache data from memory), and the many memory allocations can outweigh the impact of having to copy your data a lot.

There is also the factor that – if your queue is long lived – it may grow up to a certain size at the start of your application and for the rest of the runtime of your application remain at that size.

Well, anyways this is sort of a pedantic performance point but I am a bit of a pedantic programmer ^^, so thanks for responding!
I do certainly agree that the list based implementation is much easier to implement and understand than the ring buffer one!

Have a nice evening,
Karolin