Discussion on: To Queue Or Not To Queue

Vaidehi Joshi

Hi Karolin,

I'm not completely sure what your question is, but let me try my best to clarify! :) What I was trying to get at in that paragraph is this: with an array-based queue implementation (that is, when we use an array, which stores its data contiguously in memory, rather than a linked list, which can grow dynamically), it helps to know the size of the array beforehand if we want to access the beginning and end of the structure often. The problem is that, most of the time, we don't know how big our queue will grow. In other words, we don't know the size of the structure beforehand, so if we keep appending elements to the end of it, there's a good chance we'll run out of allocated space and have to copy the contents of the array over into a new, larger structure. By comparison, a linked list queue implementation is generally much nicer here because it can grow dynamically in size, so we don't need to copy data around in memory to accommodate newly enqueued elements.
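
If it helps to see it in code, here is a rough sketch of that linked-list version (in C, with made-up names like node, queue, enqueue, and dequeue, purely for illustration): every enqueue() allocates one new node, so the structure grows one element at a time and nothing ever has to be copied.

#include <stdlib.h>

typedef struct node {
    int value;
    struct node *next;
} node;

typedef struct {
    node *head; // where we dequeue from
    node *tail; // where we enqueue to
} queue;

void enqueue(queue *q, int value) {
    node *n = malloc(sizeof *n);
    n->value = value;
    n->next = NULL;
    if (q->tail) q->tail->next = n; // append after the current last node
    else q->head = n;               // the queue was empty
    q->tail = n;
}

int dequeue(queue *q) {
    // Sketch only: assumes the caller checks that the queue isn't empty.
    node *n = q->head;
    int value = n->value;
    q->head = n->next;
    if (!q->head) q->tail = NULL;   // the queue is empty again
    free(n);
    return value;
}

Both operations touch only the head or tail node, so neither one depends on how many elements are already in the queue.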

Anyways, this is what I was trying to communicate in that sentence; hopefully it helps! Thanks for reading, and for your kind words!

Cheers,
Vaidehi

Karolin Varner

EDIT: Actually, disregard what I was saying. Of course you were talking about a plain array, where you would have to move every element one to the left; I was jumping ahead in my mind. Please disregard, sorry about the buzz :/

Hmm. Not sure here. I think I understood what you were saying, except the bit about the "time complexity" becoming large with an array-based implementation. IMO – if you are talking about algorithmic complexity, and I am not sure you are – both the enqueue() and dequeue() operations should have constant complexity (as long as you don't need to grow the buffer).

The ring buffer I mentioned – which avoids a lot of extra copies – looks something like this:

#include <stddef.h>
#include <stdint.h>

struct ring_buffer {
    uint8_t *buffer;      // Our preallocated memory buffer
    size_t buffer_length; // Size of our preallocated memory buffer
    size_t start;         // Index of the first (oldest) element in our ring buffer
    size_t end;           // Index of the last (newest) element in our ring buffer
};

Take a ring buffer that is completely full:
|a|b|c|d|e|
 ^start  ^end

After dequeue() returns "a" – we just increase the start pointer by 1, yielding one empty slot.
| |b|c|d|e|
   ^     ^end
   start

After enqueue("f") – we just increase the end pointer by 1 and then take it modulo the length of our buffer: (4+1)%5 = 0.

|f|b|c|d|e|
 ^ ^start
 end
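
To make the index arithmetic concrete, here is a rough sketch of enqueue() and dequeue() on top of that struct (untested, with illustrative names, and it leaves out the empty/full bookkeeping a real implementation needs):

#include <stddef.h>
#include <stdint.h>

void rb_enqueue(struct ring_buffer *rb, uint8_t value) {
    rb->end = (rb->end + 1) % rb->buffer_length; // wrap around, e.g. (4+1)%5 = 0
    rb->buffer[rb->end] = value;                 // write into the freed-up slot
}

uint8_t rb_dequeue(struct ring_buffer *rb) {
    uint8_t value = rb->buffer[rb->start];           // read the oldest element
    rb->start = (rb->start + 1) % rb->buffer_length; // advance, wrapping if needed
    return value;
}

No element is ever moved; only the two indices change, which is why both operations stay constant-time.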

I suppose if you used a plain array – and not a ring buffer – you would in fact have to copy all the elements to the beginning. Is this what you were picturing?
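
Just as a sketch of what I mean (illustrative only), a dequeue on a plain array ends up looking something like this, shifting every remaining element one slot to the left:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

uint8_t array_dequeue(uint8_t *buffer, size_t *length) {
    // Sketch only: assumes the caller checks that *length > 0.
    uint8_t value = buffer[0];
    memmove(buffer, buffer + 1, (*length - 1) * sizeof buffer[0]); // O(n) shift
    (*length)--;
    return value;
}

That shift is the O(n) cost per dequeue that the ring buffer avoids.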

Personally, I also do quite a bit of low-level programming where excellent performance is needed; linked lists are usually discouraged there because of the many calls to malloc() (or the equivalent on your platform) that are needed. So the overhead of the extra pointers in the linked list, the decreased cache locality (how well the CPU can cache data from memory), and the many memory allocations can outweigh the cost of having to copy your data a lot.

There is also the factor that – if your queue is long-lived – it may grow to a certain size at the start of your application and then remain at that size for the rest of its runtime.

Well, anyways, this is sort of a pedantic performance point, but I am a bit of a pedantic programmer ^^, so thanks for responding!
I do certainly agree that the list-based implementation is much easier to implement and understand than the ring-buffer one!

Have a nice evening,
Karolin