I've always thought data structures are cool, but you know what's cooler? Seeing them in the wild!
While going through Elixir's doc, I saw that El...
Hi, Edison! Thank you for your really interesting and useful articles!
One note regarding this article: this link
leads to dev homepage instead of this one :) dev.to/edisonywh/recursion-tail-ca...
Thanks for pointing that out! But that's really weird, because I remember deleting that sentence altogether, and I can't see it anywhere now. Not sure how you saw the stale version!
Probably it was cached by the service worker. Now I actually see the article without that sentence.
Hello!
I don't understand "Adding to list has low memory cost, while retaining the immutability that we desire."
I have a list A -> B -> C
I want to add D to the list end.
next(C) == nil?
I expect that next(C) should be D.
I need to mutate C, so I must copy it.
But if C was changed then I need to change B...
So I get full list copy then I add element to the list end.
Looks like this is a suboptimal algorithm.
Can you please explain how it really works?
Good point! Yes, you're right; I should've clarified that I in fact meant prepending.
In the end, it's just a normal linked list that Elixir uses under the hood, so it inherits the same Big O cost for appending. What you're asking goes a little further: because we have to maintain immutability for structural sharing, you're also right on that count. We'd need to copy the entire linked list and only then append.
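In cons-cell terms, the asymmetry looks like this. A small sketch in Python tuples rather than Elixir, purely for illustration of why prepending shares the old list while appending must copy it:

```python
# Represent an immutable singly linked list as nested cons cells:
# (head, tail), with None standing for the empty list.
nil = None

def cons(head, tail):
    return (head, tail)

def prepend(value, lst):
    # O(1): the new node simply points at the existing list.
    return cons(value, lst)

def append(value, lst):
    # O(n): every node must be copied, because the last node's
    # "next" would have to change, and nodes are immutable.
    if lst is nil:
        return cons(value, nil)
    head, tail = lst
    return cons(head, append(value, tail))

abc = cons("A", cons("B", cons("C", nil)))

pre = prepend("D", abc)
# The tail of the new list IS the old list: structural sharing.
assert pre[1] is abc

post = append("D", abc)
# No node of the old list is reused: a full copy was made.
assert post is not abc and post[1] is not abc[1]
```

The `is` checks make the point concrete: after a prepend the old list survives untouched as the new list's tail, while after an append every cell is a fresh object.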
If we look back on my structural sharing example:
we have three lists, but all three point to the same existing tail (structural sharing). If I choose to append instead:
Although list_three has the same values [5,6,7,8], it does not point to the same nodes as list_one and list_two. I hope that makes sense! There's another really good article that even comes with a benchmark showing this in action :)
You can also see it on the official language guide here too
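The sharing (or lack of it) described above can be checked directly with identity comparisons. A sketch in Python, with tuples as cons cells; the names list_one, list_two, and list_three follow the thread, and the construction is an assumption about how the article's example was built:

```python
# Build an immutable cons-cell list from values: (head, tail), None = empty.
def from_values(*vals):
    lst = None
    for v in reversed(vals):
        lst = (v, lst)
    return lst

list_one = from_values(5, 6, 7, 8)

# Prepending shares: list_two's tail is literally list_one.
list_two = (4, list_one)
assert list_two[1] is list_one

# Building the same values from scratch (as appending forces you to)
# yields equal values but none of the same nodes.
list_three = from_values(5, 6, 7, 8)
assert list_three == list_one        # equal values
assert list_three is not list_one    # distinct nodes, no sharing
```

Equality (`==`) compares values, identity (`is`) compares nodes; structural sharing is precisely the case where the second check also passes.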
Just to add to that, some functional languages implement a “copy on write” mechanism that only copies the content of the list when mutated.
I'd love to hear about how that is implemented contemporarily, I'd really appreciate if you gave me a poke if you find/write a nice article about that.
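One common shape for copy-on-write is a wrapper that shares its backing storage on copy and pays the copying cost only at the first write. A toy sketch in Python; the class name and API are made up for illustration and don't correspond to any real library:

```python
class CowList:
    """Toy copy-on-write list: copies are O(1) and share backing
    storage; the O(n) copy happens lazily on the first write."""

    def __init__(self, items=None, _shared=None):
        self._items = _shared if _shared is not None else list(items or [])
        self._owned = _shared is None  # do we own the storage outright?

    def copy(self):
        # O(1): both lists now share the same backing storage.
        self._owned = False
        return CowList(_shared=self._items)

    def set(self, index, value):
        if not self._owned:
            # First write after a copy: pay the O(n) cost now.
            self._items = list(self._items)
            self._owned = True
        self._items[index] = value

    def get(self, index):
        return self._items[index]

a = CowList([1, 2, 3])
b = a.copy()        # no elements copied yet
b.set(0, 99)        # triggers the actual copy, only for b
assert a.get(0) == 1
assert b.get(0) == 99
```

Real implementations add reference counting so a sole remaining owner can skip the copy, but the deferred-copy idea is the same.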
AFAIK: in statically typed languages where the linked list is part of the core/stdlib (as opposed to, say, JS, where you implement it as nested length-2 arrays: [4,[3,[2,[1]]]]), the nodes are actually stored as arrays of significant size with a pointer to the next slice. (And to the previous one: since the data structure is already quite large, they are often doubly linked. What's another pointer?)
This still makes them more expensive to index into than a contiguous/dense array, but both indexing and iteration become several orders of magnitude faster than in a node-per-element linked list.
Pretty much the same idea that turned binary trees into the massively wide trees (e.g. B-trees) used today.
There's more complexity, especially since this cancels some size optimizations of structural sharing, but that's what the compiler is for.
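An unrolled linked list along these lines can be sketched in Python. The chunk capacity of 4 is an arbitrary choice for illustration; real implementations tune it to cache-line or page size:

```python
class Chunk:
    """One node of an unrolled linked list: a small array of
    elements plus a pointer to the next chunk."""
    CAPACITY = 4  # illustrative; tuned to cache lines in practice

    def __init__(self):
        self.items = []
        self.next = None

class UnrolledList:
    def __init__(self):
        self.head = self.tail = Chunk()

    def append(self, value):
        if len(self.tail.items) == Chunk.CAPACITY:
            new = Chunk()
            self.tail.next = new
            self.tail = new
        self.tail.items.append(value)

    def __iter__(self):
        # Iteration follows one pointer per CAPACITY elements, which
        # is what makes it far cheaper than chasing a pointer per node.
        chunk = self.head
        while chunk is not None:
            yield from chunk.items
            chunk = chunk.next

ul = UnrolledList()
for i in range(10):
    ul.append(i)
assert list(ul) == list(range(10))
```

Indexing works similarly: skip whole chunks by their lengths, then index into the final array, rather than walking every element.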
Interesting article! FYI "data" is plural, so "... data are immutable" instead of "... datas are immutable" (colloquially "... data is immutable" is also acceptable).
Cool, thanks for pointing out :D
Sure, the baby is reusable, but how many times is a baby really reused, in most contexts?
I don't know, don't you share your babies with other people? Seems really efficient that way!
Looks like I've just hit a jackpot billion dollar ideas, co-founder anyone?