🥳 TL;DR React State Tearing, Eventual Consistency and Time Traveling 🤓
Once upon a time I was a junior developer, with many problems and no solutions for them. I used to struggle with strange things that had no explanation.
Then I learned how to overcome challenges and obstacles. I understood how to solve problems, and how to avoid them in advance. I studied algorithms and patterns to make things run smoothly, and my output got more predictable year by year.
Ages later I took the plunge into React, and I was amazed how it simplified everything: bugs disappeared, everything performed well! The only question left was how to make things even more simple and easy to handle.
Those days are behind me now.
I've just realized that in one week with hooks I created and solved more problems than in the whole year before.
Once again I am a junior developer. Once again I am facing issues with no explanation for them. I have to, and I am going to, explore new patterns to handle the upcoming problems.
Join me on my journey.
A detective tragedy in 3 parts
1. DejaVu and time tearing
Déjà vu is the feeling that one has lived through the present situation before. The phrase translates literally as "already seen".
One day a few different people met in a single GitHub issue. They had a great conversation about the future concurrent rendering - a conversation which would later drive the development of React-Redux v6.
The main problem was "tearing" - different time slices coexisting in one render (output). Some component might see the new state, while others might still see the old one. You, as a user, will see both.
It was just a theoretical issue back then, the "insignificance" of which was confirmed by the React team (after the React-Redux v6 failure). However, here is an example which might prove it is real.
Anyway, the main point is that a year ago this was a theoretical issue - something to be faced far ahead, when React would become asynchronous and concurrent. Yet, while React is still synchronous, we've got the problem anyway - brought not by that asynchronicity, but by hooks and closures - the functional scopes we love JavaScript for.
It's common now that the state is always partially in the past. There was no such thing as the "past" with class-based components - there was only one `this`, and nothing else. And `this` always represents the "present".
With hooks, well...
- When you declare an `onClick` handler - it sees variables from the local functional scope. From that "past" scope only `refs` represent the present (see the sketch after this list).
- When you declare an `effect` - there is no "past", only the present. As a result, you don't know when the effect might trigger: the "past" and "present" dependencies are compared inside React.
- When you run an `effect` - it's already one `tick` in the past. Something might have already changed, but not for the `effect` - it is frozen in time.
- When you run `multiple effects` - they might affect each other, causing cascading and repetitive updates. Until they are all finished, there is no `past` and there is no `present` - it's all mixed, as long as every hook works on its own.
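Here is a minimal sketch of the first point (the component and names are hypothetical): the `count` captured by a callback is frozen in the render that created it, while a ref always reads the present.

```js
import React, { useState, useRef } from "react";

const Counter = () => {
  const [count, setCount] = useState(0);
  const countRef = useRef(count);
  countRef.current = count; // the ref is updated on every render

  const onClick = () => {
    setCount(count + 1);
    setTimeout(() => {
      // `count` comes from the closure of the render that
      // attached this handler - the "past"
      console.log("closure count:", count);
      // the ref reads whatever is current - the "present"
      console.log("ref count:", countRef.current);
    }, 1000);
  };

  return <button onClick={onClick}>{count}</button>;
};
```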
In the RxJS world this is called `glitches` - temporary inconsistencies emitted by Observables - and they are not considered a problem. `Glitches` in React are also more features than bugs. However, they are at least a big performance problem.
Let's create a few examples
Event propagation
Achilles and the tortoise paradox
To get started, let's pick a simple problem to deal with - `event propagation speed`. The problem is easy to reproduce, and you might have already hit it... in case you have more than one state management system:

- every event delivery system works on its own;
- and you probably have at least two.
Let's imagine a pretty standard case - React, React-Router, React-Router-Redux and Redux.
Let's imagine you are changing the location. What would happen then?
- `location` changes
- `history` updates
- `react-router-redux` dispatches an update to the redux `store`
- the `dispatch` happens outside of the React cycle, so the state is updated synchronously, and all `connected` components are triggered
- some components are updated. However, `withRouter`/`useRouter` read data from the `Context`, which is 👉not yet updated👈
- 🤷♂️ (your application is partially in both the past and the future)
- the `history` update calls the next listener, and we continue
- `Router` is updated
- `Context` is updated
- `withRouter` components are triggered by the Context update
- some components are updated, this time with the proper values
So you did nothing wrong, yet you got a double render as a result of mixing states with different event propagation speeds (modeled in the sketch below).
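To make the "two speeds" visible, here is a minimal sketch outside of React (all names are hypothetical): one change travels through two delivery systems, and a subscriber of the faster one observes a mixed world.

```js
// two independent "event delivery systems"
const storeListeners = [];
const contextListeners = [];

let storeLocation = "/old";
let contextLocation = "/old";

function navigate(nextLocation) {
  // 1. the redux-like store updates synchronously, its listeners fire first
  storeLocation = nextLocation;
  storeListeners.forEach((listener) => listener());
  // 2. the context catches up only afterwards
  contextLocation = nextLocation;
  contextListeners.forEach((listener) => listener());
}

// a "connected" component reading from both worlds
storeListeners.push(() => {
  console.log({ storeLocation, contextLocation });
  // -> { storeLocation: "/new", contextLocation: "/old" } - tearing!
});

navigate("/new");
```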
Good news - React-Redux v7 has solved this problem. It just uses the same Context as Redux-Router, resulting in the same "event propagation speed". However, any other state management, especially one with a custom subscription model, might not have solved the problem (yet).
Here is a question: what would happen if another render dispatched some events, changing the state again? Well - "Achilles, the Tortoise" - and you get more wasted renders.
However, you might think that this is not your problem. I wouldn't go along with that. Let's have a look at the same(!) problem from a different perspective.
State synchronization
Have you heard about the CAP theorem? The simplest possible description of it: there is no way to create an ideal state management. An `Ideal State` would consist of:
- `Consistency`: every `read` reads the "true" value
- `Availability`: every `read` and every `write` does the job
- `Partition tolerance`: the whole keeps working even when different parts of it are not alive
We don't have any problems with `Availability` in any client-side state management. Still, we do have problems with `Consistency` and `Partition tolerance`. It doesn't matter what you are going to write, or have just written - as long as the `write` will be performed in the `future`, there is no "read" command. You only have what you already have in the local closure, and that's "the past".
And I do have a good example for you:
- let's imagine you have some search results
- the incoming prop is a `search term`
- you store the `current page` in the `local state`
- and you load `search term` + `current page` if they have not been loaded before
Code is always the best way to explain:
```js
const SearchResults = ({ searchTerm }) => {
  const [page, setPage] = useState(0);

  useEffect(
    // load data
    () => loadIfNotLoaded(searchTerm, page),
    // it depends on these variables
    [page, searchTerm]
  );

  return "some render";
};
```
Is everything all right? It definitely is, except for a single point: you probably should reset `page` when the `term` updates. That's how a "new" search should work - start from the beginning.
```diff
const SearchResults = ({ searchTerm }) => {
  const [page, setPage] = useState(0);

  useEffect(
    // load data
    () => loadIfNotLoaded(searchTerm, page),
    // it depends on these variables
    [page, searchTerm]
  );

+ // reset page on `term` update
+ useEffect(
+   () => setPage(0),
+   [searchTerm]
+ );

  return "some render";
};
```
So what will happen when you update `searchTerm`?
- 🖼 the component renders
- 🧠 the first effect is scheduled, as long as `searchTerm` has changed
- 🧠 the second effect is scheduled, as long as `searchTerm` has changed
- 🎬 the first effect triggers, loading the new `searchTerm` with the old `page` - old, because it was old when this effect was created
- 🎬 the second effect triggers `setPage(0)`
- 🖼 the component renders
- 🧠 the first effect is scheduled again, as long as `page` has changed
- 🖼 the component renders with the right state
- 🎬 the first effect triggers again, loading the new `searchTerm` with the new `page`
- 🖼 the component renders with the right search results, once they are loaded
So: one change to props, 3 or 4 🖼 renders of a component, 2 data fetches, one of which is incorrect - the new `searchTerm` with the old `page`. Table flip!
Play with it:
This is the same `Achilles and the Tortoise` case - one update (`page`) is trying to catch up with the other (`searchTerm`), while the other keeps moving.
Everything is broken. We went a few years back in time.
NOT FUNNY. Probably there was a good reason to use Redux - after all, we were all told to use Redux because it is "correct" and helps get shit done "right".
Today we are told not to use it, but for a different reason - like it being too global.
Long story short - there are 2 ways to solve our problem.
1. KILL IT WITH FIRE
Or, in other words - set a `key` to remount the component, resetting it to the "right" values:
```js
<SearchResults searchTerm={value} key={value} />
```
I would say this is the worst advice ever, as long as you are going to lose everything - the local state, the rendered DOM, everything. However, there is a way to make it better, using theoretically the same `key` principle:
```js
const SearchResults = ({ searchTerm }) => {
  const [page, setPage] = useState(0);
  const [key, setKey] = useState(null /* null is an object */);

  useEffect(
    () => {
      if (key) { // skip the first render
        console.log("loading", { page, searchTerm });
      }
    },
    [key] // depend only on the "key"
  );

  // reset page on `term` update
  useEffect(() => {
    setPage(0);
    console.log("changing page to 0");
  }, [searchTerm]);

  useEffect(() => {
    setKey({});
    // we are just triggering the first effect from this one
  }, [page, searchTerm]);

  return "some render";
};
```
This time our `loading` side effect would be called only once, and with the "right" values provided:
- page and search term are set
- the first useEffect does nothing - the key is not yet set
- the second useEffect does nothing - the page is already 0
- the third useEffect changes the key
- the first useEffect loads the data
- ...
- `searchTerm` or `page` gets updated
- the first useEffect is not triggered
- the second useEffect might update `page` to 0
- the third useEffect updates the key
- 👉 the first useEffect loads the data, when everything is "stable"
From some point of view, we are just shifting the effect in time...
2. Move to the past
Just accept the rules of the game, and make them play on your side:
```js
const SearchResults = ({ searchTerm }) => {
  // ⬇️ mirror the search term ⬇️
  const [usedSearchTerm, setUsedSearchTerm] = useState(searchTerm);
  const [page, setPage] = useState(0);

  // reset page on `term` update
  useEffect(
    () => setPage(0),
    [searchTerm]
  );

  // propagate the search term update
  useEffect(
    () => setUsedSearchTerm(searchTerm),
    [searchTerm]
  );

  useEffect(
    // load data
    () => loadIfNotLoaded(usedSearchTerm, page),
    // it depends on these variables,
    // and they are in sync now
    [page, usedSearchTerm]
  );

  return "some render";
};
```
- changing `searchTerm` first updates `page` and `usedSearchTerm`
- changing `usedSearchTerm` and `page` loads the data - and these variables are updated simultaneously now
Case closed? Well, no - this pattern is not applicable if you have many variables. Let's try to understand the root of the problem:
To PUSH or to PULL?
Another name for this problem is the `Diamond Problem`, and it is bound to the push or pull variant of state update propagation:
- with `PUSH`, every update "informs" the `consumers` about the change. Once something changes, every `consumer` is notified about the exact change. This is how hooks work.
- with `PULL`, every `consumer` is notified about "The Change", and then has to `pull` the update from a store. This is how redux works.
The problem with `PULL` - there is no "exact change" notification; every consumer has to `pull` on its own. This is why you have to use memoization and libraries like reselect.
The problem with `PUSH` - if there is more than one change, a `consumer` might be called more than once, causing temporary inconsistencies, as well as DejaVu.
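A minimal sketch of why `PULL` needs memoization (a hand-rolled, reselect-style helper - the names are hypothetical): every consumer re-derives its data on every notification, so the derivation is cached until its inputs actually change.

```js
const createSelector = (inputFns, compute) => {
  let lastArgs = null;
  let lastResult = null;
  return (state) => {
    const args = inputFns.map((fn) => fn(state));
    // recompute only when some input has changed
    if (!lastArgs || args.some((arg, i) => arg !== lastArgs[i])) {
      lastArgs = args;
      lastResult = compute(...args);
    }
    return lastResult;
  };
};

// usage: recomputes the price only when `cost` actually changes
const selectPrice = createSelector(
  [(state) => state.cost],
  (cost) => cost + cost * 0.1
);
```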
Here is a good diagram from a State Manager Expert™ (and the creator of reatom) - @artalar:
This is a cost calculator, with a cascade update caused by the PUSH pattern. Let's reimplement it with hooks:
```js
const PriceDisplay = ({ cost }) => {
  const [tax, setTax] = useState(0);
  const [price, setPrice] = useState(0);

  // update tax on cost change
  useEffect(() => setTax(cost * 0.1), [cost]); // 10% tax

  // update price - cost plus tax
  useEffect(() => setPrice(tax + cost), [cost, tax]);

  return `total: ${price}`;
};
```
- once `cost` is updated - we update `tax` and `price`
- once `tax` is updated - we update `price` again
- `price` got updated twice, as well as this component, and probably some components below it were updated too
- in other words, `price` is "too fast"
That was PUSH. Now let's rewrite it with PULL:
```js
const PriceDisplay = ({ cost }) => {
  const tax = useMemo(() => cost * 0.1, [cost]); // 10% tax
  const price = useMemo(() => tax + cost, [tax, cost]);
  return `total: ${price}`;
};
```
- actually, this is not a PULL, this is a real waterfall, but...
- 🤔...🥳!!
You will not believe it, but ⬆️this⬆️ is the core of this article: Caching versus Memoization. We are deriving data, one value from another, in a synchronous way - which is a PULL pattern - and the result is free from the problems above.
However, there is a catch - this approach solves the problem for the calculator example, but not for our `paginated search`.
Still... let's try to solve it yet again:
```js
const useSynchronizedState = (initialValue, deps) => {
  const [value, setValue] = useState(initialValue);
  const refKey = useRef({});

  // reset the value on deps change
  useEffect(() => {
    setValue(initialValue);
  }, deps);

  // using `useMemo` to track deps updates
  const key = useMemo(() => ({}), deps);

  if (refKey.current === key) {
    // we are in the "right" state (deps have not changed)
    return [value, setValue];
  } else {
    refKey.current = key;
    // we are in the "temporary" (updating) state -
    // return the initial (old) value instead of the real one
    return [initialValue, setValue];
  }
};

const SearchResults = ({ searchTerm }) => {
  const [page, setPage] = useSynchronizedState(0, [searchTerm]);

  useEffect(() => {
    console.log("loading", { page, searchTerm });
  }, [page, searchTerm]);

  return "some render";
};
```
Here is the "fixed" code sandbox - https://codesandbox.io/s/hook-state-tearing-dh0us
Except it's not fixed, it's shifted - we have normalized the state update propagation speed, so the values are now updated at the same time.
Yet another way
Yet another way to solve this problem is to change the way we dispatch the "side effect".
Speaking in redux-saga terms - when "the State" dispatches multiple events, you might `takeLatest`, ignoring the first ones, or `takeLeading`, ignoring the following ones.
You might also know this as debounce. I prefer to call it `Event Horizons`, or event propagation boundaries.
Any (any!) example here could be "fixed" by delaying the `loading effect`, and actually executing only the last one - the "stable", correct one.
This is a very important concept enabling request batching and various optimizations - just accept that any asynchronous call takes time, especially a network request. If you delay it by a few milliseconds, or even a few CPU (or nodejs process) ticks... everything could become a little bit better.
However, keep in mind - adding timeouts and other asynchronicity to your code makes it harder to reason about.
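For example, here is a minimal sketch of such an "event horizon" (a hypothetical `useDebouncedEffect` helper): the effect runs only for the last, "stable" set of dependencies, skipping the intermediate, inconsistent ones.

```js
import { useEffect } from "react";

const useDebouncedEffect = (effect, deps, delay = 16) => {
  useEffect(() => {
    const timeout = setTimeout(effect, delay);
    // an update arriving within `delay` cancels the previous, "unstable" run
    return () => clearTimeout(timeout);
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, deps);
};

// usage in the paginated search from above:
// useDebouncedEffect(() => loadIfNotLoaded(searchTerm, page), [searchTerm, page]);
```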
As a conclusion
1. So, yet again - which hooks do we have?
- `useState` - the state is derived from props only during the first render
- `useMemo` - other values are derived from state and props
- `useEffect` - some variations of props and state are reflected back into the state

ANY `useEffect` might cause glitches.
2. React is subject to glitches
With different hooks updating independently, you may - and will - get temporary inconsistencies within a single component, leading to (temporarily) undefined behaviour or even a (temporarily) broken state.
The problem is bound to hooks, as long as you have to render a component to the very end, and cannot "bail out" if some `useEffect` is supposed to synchronize states.
The problem is bound to `Caching` and `Memoization`, which are differently affected by the `CAP Theorem` - only memoization would not cause tearing.
`Glitches` cause re-renders and slow your app down.
3. Use Class Components to handle complex state situations.
(Surprise!) Class components have `componentDidUpdate` as well as `getDerivedStateFromProps`, making complex state updates more handy. You are able to apply them as one update, without extra re-renders.
Please don't forget about classes - they could help a lot! See the sketch below.
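As a minimal sketch (hypothetical, reusing the paginated search from above): `page` is reset and the data is loaded in a single, consistent update.

```js
class SearchResults extends React.Component {
  state = { page: 0, searchTerm: this.props.searchTerm };

  // derive the state from props in one step - `page` and
  // `searchTerm` can never disagree with each other
  static getDerivedStateFromProps(props, state) {
    if (props.searchTerm !== state.searchTerm) {
      return { searchTerm: props.searchTerm, page: 0 };
    }
    return null;
  }

  componentDidMount() {
    loadIfNotLoaded(this.state.searchTerm, this.state.page);
  }

  componentDidUpdate(prevProps, prevState) {
    const { searchTerm, page } = this.state;
    // `searchTerm` and `page` are always consistent here
    if (prevState.searchTerm !== searchTerm || prevState.page !== page) {
      loadIfNotLoaded(searchTerm, page);
    }
  }

  render() {
    return "some render";
  }
}
```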
4. Use external state (like Redux)
Redux is PULL. Redux performs many small state updates in response to a single dispatch, and it can batch many state updates into one, resulting in a single React render - making `broken states` impossible.
Redux is awesome. And, let me be clear - less fragile.
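A minimal sketch of the idea (a hypothetical reducer for the same search example): one dispatch updates `searchTerm` and `page` atomically, so no consumer can ever observe a new term with a stale page.

```js
const initialState = { searchTerm: "", page: 0 };

function searchReducer(state = initialState, action) {
  switch (action.type) {
    case "SEARCH_TERM_CHANGED":
      // both fields change in a single, consistent update
      return { ...state, searchTerm: action.searchTerm, page: 0 };
    case "PAGE_CHANGED":
      return { ...state, page: action.page };
    default:
      return state;
  }
}
```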
5. Be aware of the problem
Just don't "trust" any single solution. I was quite pathetic in my attempts to solve some state problems with hooks, until I accepted it - there is no such thing as an ideal tool.
Hooks are not ideal
6. And it might not be a problem at all
Yes - almost always, it's not a problem. You might never face the terrible stories I've told you above.
Hooks are awesome!
... but let's face the truth - state management is, and always will be, a very complicated beast...
Whether you agree or disagree with this - here is an attempt to "document" all the edge cases of different state management systems:
WORK IN PROGRESS - see the tests.
This repo is an attempt to describe and formalize state-management edge cases.
Top comments (3)
Amazing article! I was just curious, does that apply to svelte stores & reactive declarations?
So - yes and no.
Here is the same example reimplemented in Svelte:
Right now it works - you update `searchTerm`, the page is set to 0, and the data is loaded.
But once you move `$: if ([searchTerm]) {` below `$: if ([page, searchTerm]) {`, it would not - you update `searchTerm`, the data is loaded, and only then is the page set to 0. So Svelte is sensitive to the order of statements, while React hooks are not.
It's not bad or good - it's different.
Thanks a lot!