Luke Barnard

Flux, Redux and the Derivation of Mental Model Equivalence

Update: this is my second attempt at posting, the last attempt was botched.

Today I had an interesting discussion with my coworker over a new bit of UI that we're currently implementing in Riot, an app that I've been helping to build for almost a year.

The UI in question is built in React and backed by a Flux store so I'll be using React, Flux and Redux terminology throughout.

The discussion went along the lines of:

  • "This View is highly coupled with the Store, surely the View should contain the logic and state it requires to operate and send Actions to the Store based on that"

vs.

  • "but in Flux, you're supposed to send intentions to the Store, not state - Actions should represent what interactions the user is making with the app".

This got us thinking about the practicalities of what sort of state should be managed by Views and Stores.

To help us reason through our existential crisis, we went through a few examples.

Example - The Checkbox

One example was the humble checkbox:

Consider a checkbox that can be displayed in one of two states—checked and unchecked. The checkbox is backed by a Store that can also be in one of two states (that is to say, the checkbox listens to "update" events broadcast by the Store that—when received—update the checkbox to reflect the state of the Store). For our purposes, we don't consider the checkbox to be controlled by anything other than the Store (i.e. it's not controlled by native UI handling).

There are two ways in which to implement this system:

  1. When clicked, the checkbox sends an Action of type "toggle_checkbox". The Store will receive this and flip the single bit that it laboriously guards (very smugly, to be honest). The Store then broadcasts an update and the checkbox is brought into alignment with the state of the Store. Or,
  2. when clicked, the checkbox sends either an Action of type "checkbox_on" or "checkbox_off", depending on the current state it is displaying. (Both options are sketched in code just after this list.)
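
To make the two options concrete, here is a minimal sketch of each Store's reducing logic in TypeScript. All the names (CheckboxState, toggle_checkbox, checkbox_on, checkbox_off) are hypothetical and only mirror the wording above; this is a sketch, not the actual Riot implementation.

 // Option 1: the Action carries the user's intention ("toggle") and the
 // Store derives the next state from its own current state.
 type CheckboxState = { checked: boolean };

 type ToggleAction = { type: "toggle_checkbox" };

 function reduceOption1(state: CheckboxState, _action: ToggleAction): CheckboxState {
   // "toggle_checkbox": flip the single bit the Store guards.
   return { checked: !state.checked };
 }

 // Option 2: the View decides the next state from what it is currently
 // displaying, and the Action carries that state into the Store.
 type SetAction = { type: "checkbox_on" } | { type: "checkbox_off" };

 function reduceOption2(_state: CheckboxState, action: SetAction): CheckboxState {
   // "checkbox_on" / "checkbox_off": the View has already decided the next state.
   return { checked: action.type === "checkbox_on" };
 }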

The UX Perspective

Notice the emphasis on the word "displaying". In UX design, we aim to match the user's expectations as closely as possible. One could argue that the user's expectations should be considered to be derived from the current View and their Actions. This would suggest that Option 2. is the better option as the Actions themselves are derived from the View shown to the user.

With all of this in mind, we can construct a crude set of equations to describe a user's ideal interaction with the system that help us reason a bit further. For brevity, I will only mention the result of the derivation. For a full "proof", see the end of the article.

 render(Reduce(CurrentState, Action)) = reason(UserReduce(UserState, Action))

Derived Principles

The equation above is a crude formalisation of why the following should be adhered to in UI/Flux/Redux development (a type-level sketch follows the list):

  • An Action should represent an act taken on the user's mental model of the system.
  • The Reduce function should represent how the user's mental model is operated on by the user's Actions.
  • The CurrentState should represent the user's mental model.
  • The method of rendering the current State of the app should be "reasonable". Or rather, the reasoning with which we render the current State should closely mimic the reasoning that the user applies to visualise their expected View.
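
One way to read these principles is as a pairing between app-side functions (what we implement) and user-side functions (the mental model). The following type-level sketch uses the same names as the derivation at the end of the article; the types themselves are illustrative only.

 // Illustrative pairing of the principles above.
 type State = unknown;      // CurrentState: should represent the user's mental model
 type Action = unknown;     // an act taken on the user's mental model
 type View = unknown;
 type UserState = unknown;  // the user's belief about the current State

 // How the app's state changes vs. how the user's mental model changes.
 type Reduce = (state: State, action: Action) => State;
 type UserReduce = (state: UserState, action: Action) => UserState;

 // How the app renders vs. how the user "reasons" their expected View.
 type Render = (state: State) => View;
 type Reason = (state: UserState) => View;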

So what on EARTH has this got to do with our two options for implementing the humble checkbox?

Well, with both options, the Actions being sent arguably represent the acts taken by the user to manipulate their own mental model. However, the second option takes into account the View that the user is currently looking at, giving control of the Store state to the View. What are the practical implications of this?

Cycling Practicalities

As simple as this system is, there are practical implications in choosing either option, namely in the scenario where two of the same Actions are fired in quick succession. We assume that in this situation the user expects the two clicks to effectively cancel each other out. In each scenario, the following could occur (sketched in code after the list):

  1. Two "toggle_checkbox" Actions are dispatched and the Store responds by making two calls to set its internal state. The Store will infer new state by using its current state, which results in the bit flipping to 1 and then back to 0. In theory, these updates could have been displayed in quick succession to the user.
  2. Two "checkbox_on" Actions are dispatched and the Store responds by setting its internal state bit to 1, twice. This is (in theory) before the View is updated, after which it will send "checkbox_off" Actions when clicked.

Option 2. clearly shows a situation where the user's expectations will not be met, despite the Actions seemingly representing the user's manipulation of their mental model. But is this strictly true?

Is the user's reasoning of how a checkbox behaves the same as the reduction of state computed in option 2.? Yes, it is. If someone tells you to turn a light on, and then tells you again after you've turned it on, you're probably not going to turn it off (but interestingly, you'd probably question that someone's mental... model).

So if the Store is fine, is the View to blame? Yes. The View in option 2. controls the Store and inverts the data flow by setting the Store state. It does this by using its own state, which is potentially out-of-date, establishing itself as an (evil) source of truth.

Back to The UX Perspective

So did we miss something when we applied our fundamentals of UX design? Didn't we say that option 2. was better because it considers the current View that a user is seeing?

To put it another way, the Actions were derived not from the user alone, but from an assumption made about how the user operates. In option 2, we assumed that updating the state of the Store would update the internal state of the user, and that the internal state of the user was not being "reasoned" at the point that the user clicked again.

What we didn't take into consideration when designing option 2. was that the expectations of the user extend way beyond a single action. This is why it is so important not to derive Actions from the View: by doing so, we assume that the View matches the user's mental model. We can only hope that our Derived Principles hold; we should not assume that we can short-circuit them.

With that, I leave you with some equations that might make sense.

Derivation of the Mental Model Equivalence (tm)

We use Redux to derive the following:

 View = render(CurrentState)

 NextState = Reduce(CurrentState, Action)

 NextView = render(Reduce(CurrentState, Action))

And in an ideal world, we hope that the user's expected View matches the View that we are about to render:

 NextView = render(Reduce(CurrentState, Action)) === UserExpectedNextView

One way to consider the user's View is a kind of mental "rendering", more commonly known as Reasoning:

 UserView = reason(UserState)

where UserState is the user's current belief as to what the current system State is.

It follows that,

 UserExpectedNextView = reason(UserExpectedNextState)

One could suggest that the next View that the user expects is also a function of the current View and the Action that the user takes:

 UserExpectedNextView = user(View, Action)

and therefore ideally,

 render(Reduce(CurrentState, Action)) = user(View, Action) = reason(UserExpectedNextState)

and actually, the user has some idea as to how the State should change too:

 UserExpectedNextState = UserReduce(UserState, Action)

so

 render(Reduce(CurrentState, Action)) = reason(UserReduce(UserState, Action))
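
As a rough illustration (not part of the derivation itself), the final equation can be read as a property we would like to hold for the checkbox, reusing the hypothetical names from the earlier sketches:

 // Mental Model Equivalence as a property of the toggle checkbox.
 // render and reason both just describe the checked bit here, and we assume
 // the user's belief (UserState) currently matches the Store's CurrentState.

 const render = (s: CheckboxState) => (s.checked ? "checked" : "unchecked");
 const reason = (s: CheckboxState) => (s.checked ? "checked" : "unchecked");

 // The user's own reducer: "clicking toggles the box".
 const userReduce = reduceOption1;

 const currentState: CheckboxState = { checked: false };
 const userState: CheckboxState = { checked: false };
 const action = { type: "toggle_checkbox" } as const;

 // render(Reduce(CurrentState, Action)) === reason(UserReduce(UserState, Action))
 console.log(
   render(reduceOption1(currentState, action)) ===
   reason(userReduce(userState, action))
 ); // true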
