We inevitably touch on function purity, point-free style, recursion, immutability, etc., when discussing functional programming. You may not practice every aspect of functional programming in your run-of-the-mill job, but if you work extensively with JavaScript libraries like RxJS or Cycle.js, or state management tools built on Flux (Redux, Vuex), I am sure you come across immutable objects more often than anything else functional. In fact, immutability is so crucial to the reactive world of programming that you can count it among its basics. We are not going to talk about strings and other primitives in JavaScript, which by design are always immutable.
A reactive library needs to preserve state throughout the execution of the program. Why? How else would it detect a change in the state? Think of it like this: given that JS objects are ephemeral (non-persistent), once you modify a property its value changes while the object stays the same. If you compare the object before the modification to the one after, well, they are equal. Obviously you know why: modifying a property won't generate a new object! To follow this, remember that a variable holding an object in JavaScript actually holds a reference to the memory block where the object's properties are stored as key-value pairs. Now you may say that you can detect a change by employing a recursive, deep comparison of the data? Not a performant idea when your state keeps changing every now and then! Immutability suggests shallow copying the object and making the new modifications on the new copy. It wouldn't be wrong to think of the copying step itself as a signal that something in the state changed. A simple reference equality check is then a much faster, performance-friendly way to tell whether or not the state changed. That may trigger another doubt: how can making copies of your state be more performant than a recursive check on the property which changed? Well, that's a good question. I will catch up with it towards the end of this post; for now I'd say that there's something called structural sharing that makes this possible.
In essence, immutability has the following benefits:
1- Reactivity through change tracking - We already discussed this. Immutable state makes identifying changes quick and effortless, both for the machine and for us developers. This is what tools like Redux and Vuex, and even parts of React and Vue themselves, build their reactivity upon. As soon as something in the state changes, be it the result of some asynchronous background activity or of user interaction with the UI, a reference equality check instantly signals that it may be the right time to re-render.
2- Predictability and better debugging - Predictability is frequently linked with function purity. Given a function that causes no side effects, the output will always be the same for the same set of inputs, no matter how many times you call the function. With the restriction that no function may modify the shared state, tools like Vuex and Redux let you modify the state only in ways that fulfil their criteria. For example, you can only make changes to the Vuex store through functions listed as mutations in the store. You also have access to methods like Vue.set() and Vue.delete() to register your changes reactively. This makes debugging easier and outputs/errors more predictable.
3- Versioning - Isn't it obvious that if you can preserve states you can go back and look at the old ones whenever needed? It's quite similar to how you still have access to an old piece of code in Git even after merging several times on top of it. Redux implements a feature they call "action replay", wherein you can watch the state change and the user interaction side by side in the browser. You think it's helpful? Of course! Cool and helpful. Now you know how important it is to preserve the state.
4- Performance - I saved this for last only because I had not talked about structural sharing when we were discussing performance. You may still be wondering how creating new objects for every small change can be more performant than a deep equality check on the objects. While talking about immutability I also used the term shallow copy; that should have given out some hint. If not, it's still nothing to worry about. When making copies, it's important to be aware that the object you're copying may have nested objects as values of its properties. We shallow copy (just copy the reference without creating a new object) the nested objects which are not to be changed and only re-create the nested object that actually needs to change. That's what we call structural sharing between two objects. You share the entire structure via internal references and only re-create the node that needs modification. An example may help you wrap your head around it.
You see, it's not that difficult, but only until the object grows to thousands of properties; when you then need to modify some very, very deeply nested object, spelling out every copy will surely break your fingers. If that wasn't enough trouble, the thought of mistakenly altering some other nested object may start bothering you. To avoid the hassle when dealing with large chunks of objects, you may opt for libraries like Immutable.js or Immer. I would highly recommend this article by Yehonathan if you'd like to learn more about structural sharing. If you'd like to explore more of functional programming, give this a read to understand recursion from my point of view.