DEV Community

Discussion on: 2021 JavaScript framework

Ryan Carniato • Edited

SolidJS author here. I wasn't really going to say anything, as clearly the author can say whatever he wants to promote his library Fre, and I have no issue with that. But reading the comments, there is a lot of confusion and misunderstanding here.

Every single framework mentioned uses some sort of view transformation, whether a custom DSL or JSX. Solid isn't the only library to custom-transform JSX; InfernoJS does too, and it clearly falls in category 1. Solid doesn't transform any code outside of the JSX.

JSX is a spec for XML in JavaScript: facebook.github.io/jsx/
There is no mention of HyperScript (the common compiled output), and not even implied semantics. It's literally the first sentence:

JSX is an XML-like syntax extension to ECMAScript without any defined semantics.

So while you can build up some vision of how things should work in your mind, just know that it is something you decided and not inherent to the technology.

Similarly, every library except maybe Prism has a runtime. Even Svelte. There is no contradiction here. All of these things are more or less the same: they compile and they have a runtime. That is why I give people leeway when categorizing things to make their point, because it's really hard to portray any of these frameworks as truly different from one another. One could argue that Svelte can't be used without compilation, whereas all the other libraries, including Solid, can. You probably wouldn't use them that way, but you could make that argument.

And I think that point is further cemented by the comments about Sinuous, which is a fork of Solid's HyperScript (i.e., non-compiled) renderer. Anything said about Sinuous and Haptic applies to Solid. Clearly the author of that comment believes in the purity and simplicity of the reactive model. That's what we have with Solid. It's not a messy contradiction or compromise.

The non-compiled exploration here has been going on for more than a decade. It's nothing new. I took this direction with Solid because, after years of working on this and watching junior devs repeatedly hit the same footguns, it became evident that the ergonomics would never be acceptable. I did so without changing the reactive mechanics, only adjusting the API surface. Maybe we can chalk up some of Solid's success so far as validating that position.

But I wouldn't be here if I was just satisfied with the status quo so keep on believing what you need to push things forward. It's a powerful thing. And I look forward to seeing what you all come up with.

Rasmus Schultz

Solid doesn't transform any code outside of the JSX.

This is partially what I was hung up on - I was thrown off by some subtle differences between the counter example and the no-compilation example:

One uses count to emit the value and the other uses count().

One does setCount(count() + 1) and the other does setCount(c => c+1).

I suspected these differences were due to some compile-time black magic, but it turns out these are just different ways to do the same thing. (You might want to align these examples so they're more closely matched? It's difficult to judge the difference between compiled and non-compiled when not all of the differences are relevant.)

I may have misjudged and gotten hung up on something that isn't there. 😅

I may have been wrong about the changing semantics as well - I think I've gotten so used to the semantics of the de-facto standard JSX transform that I had forgotten this wasn't part of JSX in itself... in my mind, JSX, the React transform, and the resulting semantics were just a part of how JS worked - but they're obviously not.

So now I think I owe it a closer look. 😊

It's still not at all clear to me what the semantic meaning of JSX is with Solid's custom transform though - I still look at the compiled output code and struggle to understand what exactly a JSX expression means or is.

With the React transform, this is remarkably easy to understand after a quick look in the Babel REPL at some JSX output - whereas the output from the Solid transform is more like the compiled output of Svelte or something... I struggle to make a connection.
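(For reference, this is roughly what I mean - with the classic React transform, a JSX expression is just a mechanical rewrite to a nested function call:

const el = <div onClick={increment}>{count}</div>;

// becomes, with the classic React transform:
const el = React.createElement("div", { onClick: increment }, count);

That is easy to hold in my head; the Solid output, so far, is not.)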

I'm not the sort of person who picks up things like this and happily accepts them as a "black box" that "just works" - I like things I can understand and explain.

Have you written anything about the transform? Or how would I go about trying to grasp the inner workings of this?

Thanks for your patience in explaining and discussing this, Ryan! 👍

Ryan Carniato

Yeah, the counter with the c => c + 1 was just a style choice - something React developers would find more familiar. It has the benefit of self-referencing without creating a subscription (it automatically untracks).
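As a rough sketch, just to line the two styles up side by side:

import { createSignal } from "solid-js";

const [count, setCount] = createSignal(0);

// read-then-write: reading count() creates a subscription if this
// runs inside a tracking scope (an effect or memo)
const increment = () => setCount(count() + 1);

// updater form: the previous value is passed in, so you can reference
// the current value without reading count() and without subscribing
const incrementAlt = () => setCount(c => c + 1);

Both do the same thing here; the second just avoids the read.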

I've written a bit about the JSX transform: javascript.plainenglish.io/designi...
But often I suggest just looking at the Output tab from the REPL: playground.solidjs.com/.
While there is some optimization around grouping effects, event delegation, JavaScript ternary and boolean expressions, and detecting static-only expressions that don't need to be wrapped, you can view the whole process basically like this:

const d = <div onClick={increment}>{count()}</div>

// roughly compiles to:
const d = document.createElement("div");
d.addEventListener("click", increment);
createEffect(() => d.textContent = count());

Mentally, it creates real DOM elements and wraps expressions in reactions (createEffect is like MobX's autorun) to update the DOM. I like to think of it as roughly what I'd write if I wasn't using a template language. If you were to take the reactive system of your choice and try to create that element without tagged template literals, what would you do?

This differs from something like Svelte, which compiles your code into its internal component structure and distributes it across class lifecycles. Solid's system is just reactivity. The one arguable exception is the insert, but it's written the way it is to avoid certain closures. It really is:

function insert(el, fn) {
   createEffect(() => internalInsert(el, fn));
}

The internalInsert is basically the closest equivalent to the HyperScript function, except it is only concerned with insertion. What's cool about this approach is that, outside of spreads (which basically use the other part of the HyperScript function), we know which attributes can change and can just inline the effects in the compiled output. These are all little details, but they both streamline performance and have the really nice benefit that most end-user code ends up in the compiled output in an easily traceable form.

In the end I just think of components as factory functions, where each dynamic expression closes over the state and the factory returns the native DOM element. This is basically the same as the HyperScript version, except we shortcut all the checking of which attribute has changed.
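Hand-written, that mental model is something like this (a sketch, not the actual compiled output):

import { createSignal, createEffect } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);

  // create the real DOM node up front
  const el = document.createElement("button");
  el.addEventListener("click", () => setCount(c => c + 1));

  // each dynamic expression closes over the reactive state
  createEffect(() => (el.textContent = String(count())));

  // the "component" is just a factory that returns the element
  return el;
}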

Hopefully that makes sense.

Yisar

You still haven't explained the inconsistency between the semantics of fine-grained updates and React APIs (hooks). Unless you use an API that doesn't work like React, it's hard to avoid semantic distortion.

For example, the following snippet is more semantically consistent:

function Component () {
  let count = useState(0) // will not rerender
  return () => <div>{count()}</div> // will rerender
}

You use APIs similar to React's, but you also use fine-grained semantics, which is a distortion in itself.

Sometimes I like the fine-grained update model, but its semantics should not be distorted.

Ryan Carniato • Edited

Want the opposite argument? I will play devil's advocate for a bit. I don't fully believe this, and I know the answers to the questions I'm about to present, but I want you to put this in the perspective of someone who has been using JSX this way for years before Hooks ever existed, and who has been using fine-grained reactivity for longer than React has existed.


Hooks are the distortion. Why would I ever expect a function that executes over and over to retain state in a variable declaration? Why should I have to be aware of stale closures, or of the order I write things in, in something that runs over and over? How is that the baseline?

Whereas HTML, an XML dialect, defines a declarative interface that updates as attribute values do. Is it so unnatural to believe that a different XML DSL would do the same? Which is more distorted?


My point is that you have arbitrarily decided what is the distortion and what is not. We are actually just seeing the exact opposite thing sitting in the same space. Your semantic example isn't fine-grained; Solid's updates are more granular than that - they happen at the binding level. Who's to say what the meaning of { } is in relation to the static XML parts of the JSX?

Now I understand React is popular and others have copied it so there is weight here. But that's just a sort of preconditioned bias and not inherent to the technology. I have my biases as well. But let's call them out for what they are.

Yisar

I can accept the appropriate distortion of JS semantics, because the idea of compilation itself can only be like this. I wasn't defending Fre. In fact, beyond Fre, I have written other frameworks. They each have their own scenarios, but they also have defects.

Rasmus Schultz • Edited

@ryansolid I've been thinking about this for a while now...

Hooks are the distortion. Why would I ever expect a function that executes over and over to retain state in a variable declaration?

On this we agree. This is what I disliked about hooks from the start. While they were in fact implemented in plain JS, they use global state (and Dark Magic) to make functions work in very unexpected ways.

When a library requires a special linter to tell people when valid JavaScript is not valid, you know they've gone too far with the "cleverness". It's impressively clever - in the worst possible sense of both words.

On this we agree.

Who's to say what the meaning of { } is in relation to the static XML parts of the JSX?

Actually, the spec is pretty clear on this point.

The Syntax section clearly states:

JSX extends the PrimaryExpression in the ECMAScript 6th Edition (ECMA-262) grammar

So this is not a stand-alone syntax - it is an extension to the ECMA-262 specification, building upon its grammar.

You can see this in the actual grammar, where all the new syntax elements are prefixed with JSX - while the stuff in curly braces (and elsewhere) references AssignmentExpression, an element of the ECMA-262 specification.

While the JSX syntax extension does not have any defined semantics, the cut-off point for that is the stuff in curly braces. The ECMA-262 AssignmentExpression element has very clear and detailed semantics attached to it.

It's still not clear to me if Solid changes these semantics - my impression is that it does, in some subtle ways? Something to do with how it handles observables at compile time? How would the compiler even know at compile time what is or is not an observable, given that JavaScript has no static type information? Does it use some sort of inference? That would seem a bit risky or fragile. What happens when it can't capture the type information it needs?

It makes me uneasy - for the same reasons hooks make me uneasy.

Ryan Carniato

Yeah, Solid doesn't even have an isObservable. All we do is wrap things in functions. If it is a function, it could be reactive, so we call it in a computation; otherwise we just execute the expression. I know that if there are no member expressions or call expressions it doesn't need to be wrapped. For components I do the same thing, except instead of using a computation I literally assign it to the props object, and things that need to be wrapped are put behind a getter.
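For a component, the shape is roughly this (simplified, not the exact compiler output):

// source
<Greeting name={user().name} title="Hi" />

// compiles to something like
createComponent(Greeting, {
  // dynamic expression hidden behind a getter, evaluated lazily
  // wherever the component reads props.name
  get name() {
    return user().name;
  },
  // static value just assigned directly
  title: "Hi"
});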

This all seems like it would add some overhead, but because we group attribute reactivity in templates, as long as something is reactive you haven't created anything extra that has a meaningful impact. It also isn't fragile, since it is deoptimized by default: it will always work, it just can often work better. And this somehow is still performant enough for the common cases. The worst case is that you create a couple of computations that only run once and never update.

In essence we don't generally rewrite the JS expressions passed into the JSX. We just choose whether or not to wrap them in functions based on a very simple heuristic, so all the JavaScript expressions execute the way they are written. The only exception to that is handling ternary/boolean operators that branch to things that would be wrapped. We independently wrap the condition in those cases in a computation to prevent repeated execution. I honestly wasn't intending to go this way, but since it is completely analyzable and people like this shorthand (instead of using our <Show> component) it was a reasonable addition.
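To illustrate the ternary case (again simplified, with names loosened - not the exact output):

// source
const view = <div>{loggedIn() ? <Profile /> : <Login />}</div>;

// conceptually: the condition gets its own computation, so the
// branches only swap when loggedIn() actually changes, instead of
// re-running the whole expression on every update
const div = document.createElement("div");
const condition = createMemo(() => loggedIn());
insert(div, () => (condition() ? createComponent(Profile, {}) : createComponent(Login, {})));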