A couple of years ago in The Real Cost of UI Components, I explored the cost of components in JavaScript frameworks. I asked whether components...
The thing is, most abstractions come with some overhead... And we still use them happily because they are also useful. So I don't see the focus on overhead or performance as particularly interesting in general. But modularity, separation of concerns, cohesion/coupling, declarativeness -- that is the kind of thing I think we should think about much more. That would in and of itself be worth a series of articles.
Long story short, components are not going anywhere because modularity remains a necessity for any codebase of reasonable size. What exactly a component is may vary, but the idea that you split a big thing into small things because the big thing is too big for certain purposes -- that is not going away.
Being separate, standalone units, modules facilitate reuse in miscellaneous contexts, which then positively impacts maintainability, as you mention.
Modularity also relates to composition because the small things must be factored back into the big thing. A good modularity story must go hand in hand with a good composition story.
So the interesting question for me is how to modularize programs or applications.
Talking about UI frameworks, I noticed that:
In other words, modularization of web applications is more often than not suboptimal, and instead of the spaghetti of imperative programming, we have the spaghetti of many components that interact with each other in obscure ways through undeclared dependencies on external data. I discuss that at length in my Framework-free at last post.
To be real modules, components should be as independent and interchangeable as ESM modules are. That in particular means that they should have an interface that allows predicting the entirety of their computation, so the program that uses the component need not depend on its implementation details and, reciprocally, the component need not know a thing about the program that uses it.
So the future is not component-less in the least. In the same way, the fact that ESM modules can be bundled into a single file does not mean that ESM modules are unnecessary overhead. But we may indeed be interested in better ways to modularize our code, that is, a better componentization story than we have as of now, because a lot of what we call components are not actual modules, which seriously complicates, as we know, the composition story.
So I am thinking let's see how the story continues and how you will address modularity in whatever it is that you propose.
For those interested in modularity, coupling and cohesion: http://cv.znu.ac.ir/afsharchim/T&M/coupling.pdf
There is an interesting attitude that often comes from the functional-programming community, that is, a stubborn ignorance of performance.
Yes, abstractions always bring overhead, but there is a thing called "zero-cost abstraction" in the C++ and Rust communities. These costs should be paid at compile time rather than at run time.
What Ryan is trying to say here is simply this:
For React with its VDOM, components are cheap at runtime, so those VDOM-based frameworks can keep components around at runtime;
For non-VDOM frameworks like Solid and Svelte, a runtime component interface comes with a detectable cost, so we keep components only at authoring time and eliminate them during compilation, so they vanish at runtime.
This is surely a legitimate argument: take a little longer to compile, achieve better runtime performance, and do no harm to modularity, decoupling, etc. Very close to "zero-cost abstraction".
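To make that concrete, here is a rough sketch in Solid (my own illustration, not from the article): the component function runs once at setup, and after compilation no component instance survives into the runtime.

```jsx
import { createSignal } from "solid-js";
import { render } from "solid-js/web";

// This function body executes exactly once. Updates don't re-run it:
// only the text node that reads `count` subscribes to the signal,
// so a click patches the DOM directly, with no component in between.
function Counter() {
  const [count, setCount] = createSignal(0);
  return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
}

render(() => <Counter />, document.body);
```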
Quoting from a previous reply:
Second line of thought: functional UI also ignores components. In fact, Elm has long recommended staying away from arbitrarily putting things in components a la React, not because of some artificial FP religious values, but simply because they have found better patterns (that is, patterns with better tradeoffs).
Yes, JS is by far the most widely used (partly) FP-flavored language in the practical world, thanks to the hard and dirty work of V8 and the other engine teams, who take performance as a major pursuit rather than ignoring it.
Similarly, the React core team does all the complex and dirty work inside the framework so that we can enjoy the neat f(state) -> UI pattern. And yes, they are trying their best to improve performance.
Yes, that is why we have Rust and WASM now, and they may bring great changes in the near future.
That is the point. In EE we have a concept called the gain-bandwidth product. For a given circuit pattern, the product remains constant: increasing gain will harm bandwidth and vice versa. It seems much like the argument that pursuing better performance and less overhead will harm modularity and neatness. When we have a fixed performance-neatness product, say, 12, do we choose 2 for performance and 6 for neatness, or 4 for performance and 3 for neatness? That is what tradeoff means.
But that is only the beginning of the story. In fact, human beings are developing new circuit patterns, inventing new designs, and exploring new materials to achieve a better product. The same here. We cannot say vanilla JS and React and Vue and Solid share the exact same performance-neatness product, so that the only thing that matters is some kind of tradeoff. Not true. Framework authors are trying to push the product to a higher level. Ryan in this article is trying to point out something that can improve performance without harming neatness. In fact his work can be seamlessly used in XState or Raj or Kingly, all tools you mentioned in the Functional UI articles. That is pure progress. That is what you called better patterns bringing better tradeoffs.
Application-level engineers like us mostly accept a given performance-neatness product determined by our infrastructure and make tradeoffs within it. But infrastructure-level engineers, like framework authors such as Ryan, have the higher duty of enhancing the product for the good of all.
I feel like this is slowly drifting off topic. The title of this piece is "Components Are Pure Overhead", an assertion that I reject as sorely lacking in nuance. Then "The Future is Component-less" I also reject, because once again we have a framework author busy evangelizing his particular vision of the future through gratuitous, dramatic, click-baity formulas. As much as I like discussing programming topics, and god knows a lot of topics are worth discussing (modularization being a very important one), this kind of gross, ill-founded generalization irks me to no end and takes me away from actually spending my time addressing them.
Regarding performance improvements of libraries, frameworks, compilers, etc., hats off to all those who are bringing this about. I am glad that they found their calling and that their audience can benefit from their efforts. They generate options and enlarge the solution space. I do reiterate, however, that performance is just one variable among others and that architects and tech leads need to take a holistic view when making decisions.
I do get the point that you can compile away "components" under some circumstances -- that works for any abstraction (Kingly for instance compiles away its state machines). I do get the point that removing the necessity to create components for reasons other than the benefits of modularity actually frees the design space for the developer. All of that is good. Whether all that will actually be worth pursuing in your specific application/team/constraint context is another question. Your mileage will vary.
I think that this position is informed by past experience on the back end and on desktop.
As a result, future average-device single-thread performance and connection quality could be trending downwards under many circumstances.
Aside: The Mobile Performance Inequality Gap, 2021.
The headroom necessary to accommodate the overhead of microfrontends may exist over corporate backbones - not so on public mobile wide area networks. So in many ways a lean perspective, much like in the embedded space, is beneficial in web application development.
Most of React's optimizations in the last few years were geared towards getting the most out of the client's single (main) thread performance, in order to preserve the "perceived developer productivity" of the Lumpers component model where "React is the Application/Architecture", and to forestall the need to adopt an off-main-thread architecture which moves application logic and client state to web workers - significantly increasing development effort. Svelte garnered attention because it is often capable of delivering a much better user experience (than React) to hyper-constrained client devices by keeping the JavaScript payload small and the CPU requirements low - while also maintaining a good developer experience.
The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, their cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose runtime cost where there is a runtime benefit - all other (design-time) abstractions should ideally evaporate at compile time.
Maybe. But also I think that user experience is the thing that we care about. Performance, understood here as CPU-bound, is one of many proxies to that (look and feel, network, offline experience, etc.). The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources. Microbenchmarks, being by design not representative of the user experience, are interesting for library makers but not so much for people picking libraries. That is, you would not pick a framework or library based on microbenchmarks. So that is why I never find the arguing over a limited definition of performance in unrealistic conditions remotely insightful.
JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime. Should we compile JavaScript to binaries and send those to the browser? Compiling is great, inlining is great, anything to make the code run faster is great, but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity in a landscape that is already crowded.
Yet the article is about removing constraints caused by the current abstraction. The foundations here predate React or this current component-centric view and are echoes from a simpler time.
That being said, I'm not suggesting going back there. My argument here has been about removing the cognitive overhead of contending with two competing languages for change on the React side (VDOM vs Hooks), and liberating the non-VDOM side from unnecessarily imposed runtime overhead that hurts its ability to scale.
But if that isn't convincing enough consider the implications on things like partial hydration. This has much larger performance implications.
When I step back this isn't micro optimizing but adjusting the architecture to ultimately reduce complexity. Nothing like leaky abstractions to add undue complexity. Every once in a while we need to step back and adjust. But like the pool I bought last week that won't stay inflated, it often starts with finding the leak.
How can you be sure that it isn't noticed? Squandered runtime performance is an opportunity cost to user experience.
A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread (Chrome Dev Summit 2018)
And there are costs to the business as well:
Marissa Mayer at Web 2.0 (2006)
Google Marissa Mayer speed research
web.dev: Why does speed matter?
JavaScript is the means for browser automation. Ideally most of the heavy lifting should be done by capabilities within the browser itself, coordinated by a small set of scripts. Unfortunately many JavaScript frameworks and libraries decide to do their "own thing" in pure JavaScript, potentially bypassing features that are already available in the browser.
So tooling which emits the minimum amount of code necessary to get the job done sounds like the logical next step.
And at the risk of repeating myself:
Data-Oriented Design: Mapping the problem (2018)
More than a decade ago, part of the game industry, being constrained by having to deliver optimal user experiences on commodity hardware, abandoned object orientation as a design-time representation because the consequent runtime inefficiencies were just too great. In that case it led to a different architecture - Entities, Components and Systems (ECS) - aligned with the "machine" rather than the problem domain.
Similarly, in the case of a (web) client application the "machine" is the browser. "Components" serve neither the browser nor the user at runtime - so it makes sense to make them purely a design-time artefact that gets erased by compilation - or perhaps "components" need to be replaced with an entirely different concept.
In the same way that the bundler removes ESM modules, frameworks can remove components. If you were loading each ESM module independently, I would argue that is unnecessary overhead. And it's the same thing here.
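Schematically, the analogy looks like this (a toy example of my own):

```js
// Before bundling, this lives in two ESM modules:
//
//   // math.js
//   export const add = (a, b) => a + b;
//
//   // main.js
//   import { add } from "./math.js";
//   console.log(add(1, 2));
//
// After bundling, the module boundary is erased but behavior is identical:
const add = (a, b) => a + b;
console.log(add(1, 2));
```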
I'm not saying people won't modularize their code and write components. Just that they don't need to be a mechanical part of the system, and we should explore removing their weight. It started from a performance perspective, but it has DX implications too.
I am familiar with the idea of driving everything from above. I'm just wholly not convinced. There are similarities to MVC and MVVM and those are perfectly good models. In fact, I think there is a good portion of pretty much every app that benefits from this. However, at some point, the rubber meets the pavement.
And sure, you can write everything in vanilla JS. That's always an option. As is hoisting state. The same can be said for web components. But there is that zone where cohesion matters, and that is where I'm focusing. This is the domain of UI frameworks.
The reason that React and other frameworks are looking so hard at solutions here is that they are essentially trying to see if we can hoist state but let the authoring experience be that of co-location. It's a sort of inversion-of-control-like pattern. Solid is like what you'd get if you decided to write a renderer from a state management solution, and in doing so we kind of stumbled onto a solution that achieves exactly that.
The too-few or too-many component issues still transfer outside of the components themselves. It's true of state management too - any sort of hierarchical tree where there are ownership/lifecycles and the need to project that onto a different tree. I think it is important to see there are two trees here, but just as important to not force things too far in either direction. Pushing things up further than they want to go is bad for different reasons than pushing things too far down.
That's really the whole thing here. It's about removing unnecessary boundaries caused by misalignment. Modularity has its place, but you don't need a JavaScript framework to give you that. It comes down to the contract of your components. Some things are naturally coupled, so why introduce the overhead of communication there as well? The problem with common frameworks is that you aren't breaking things apart because they are too large, but for some other reason. I want to remove that reason. Breaking stuff apart is perfectly fine, but why pay the cost when it comes back together?
Overhead, maybe; unnecessary, not sure. There are the costs of things, and then also their benefits. So you need to sum both.
Sure. That is the same idea as dead code elimination, i.e. not bundling library code that is not used. But that does not mean libraries are overhead, right? The dead code sure is.
Interestingly, that may be the zero-overhead solution. But in the article I was not advocating using vanilla JS only. With Functional UI you can still use React, for instance, but you would only use pure components. Pure components are actual modules. The module interface is the parameters of the function. They depend only on their parameters, which makes them independent, so they can be kept separate and reused in many places. They compose easily because they are functions. In fact, we haven't yet found a much simpler way to compose/decompose computations than functions.
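For instance (a minimal sketch; the component names are made up):

```jsx
// A pure component: its interface is exactly its parameters. It depends
// on nothing else, so it can be kept separate and reused anywhere.
function Price({ amount, currency }) {
  return <span>{currency} {amount.toFixed(2)}</span>;
}

// Composition is just function application through JSX:
const Cart = ({ items }) => (
  <ul>
    {items.map((item) => (
      <li key={item.id}>
        {item.name}: <Price amount={item.total} currency="EUR" />
      </li>
    ))}
  </ul>
);
```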
Now, a user interface application can be seen as a series of computations (reactions to every incoming event - that's Functional UI), but also as a process that is alive as long as the browser page is open. So how do we modularize long-lived processes? There have been several answers to that question. In microfrontend architectures, for instance, modules are mini-applications that can be deployed entirely independently. They communicate with other modules through message passing (or events) to realize the whole application behavior. Mini-apps as modules come with their own set of tradeoffs and overhead, but those who adopt that architecture find it worth the independent deployability advantage that they get. You can have different teams working completely independently on the smaller parts, which gives you development velocity. But that's just one way to modularize; there are others.
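A minimal sketch of the message-passing idea (the bus and event names are invented for illustration):

```js
// A shared event bus lets mini-apps communicate without importing each
// other, which preserves their independent deployability.
const bus = new EventTarget();

// The cart mini-app subscribes; it knows the message shape, not the sender.
bus.addEventListener("cart:add", (event) => {
  console.log("cart module received", event.detail.sku);
});

// The catalog mini-app publishes without knowing who is listening.
bus.dispatchEvent(new CustomEvent("cart:add", { detail: { sku: "A-42" } }));
```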
Yes, cohesion/coupling is a discussion worth having. What makes sense to group together? What is the shape of that group? How do the groups communicate? etc.
Absolutely agree. How do we modularize in ways that preserve properties of interest? That is a discussion worth having.
Sure but also why not? Costs have to be put in front of benefits. For instance, not paying the cost of reassembly (when your small modules become a big one) through compilation may have other costs that are not discussed or obvious; and/or produce benefits that are not worth the trouble. I can't talk about what you are proposing because I don't know what that is.
The general idea to be efficient or economical is a good one, that is the base of any proper engineering approach. But my point is that the devil is in the details. So I am looking forward to seeing how you approach the problem, and what are the benefits that your approach will provide, the associated costs, and what the sum of that looks like.
This part confused me, but I may have misunderstood what you were going for. Please correct me if I'm mistaken, but when a parent component re-renders, so will all of its children, unless those children:
1. have a `shouldComponentUpdate` that always returns `false`, or
2. are wrapped in the `React.memo` HOC (basically the same as #1 but more explicit).

I'm over-generalizing, so apologies for the inaccuracy. That differs between VDOM implementations. Some do check the props directly. But you are correct, that is what it means for React specifically.
Ah, all good! I thought I'd misunderstood something.
Thank you for this thoughtful analysis. I love this point:
IMO the real advancement that most UI libraries/frameworks bring today is declarative UI. This brought a gigantic leap in developer productivity compared to the jQuery or vanilla DOM manipulation days. Of course, they all tend to undercut their success with a component model, which gives a concrete place to start but leads people to abstract prematurely and pay the overhead price, then the price to change that abstraction later.
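To illustrate the leap (a schematic contrast of my own, not from the article):

```jsx
// Imperative (jQuery era): spell out every DOM mutation by hand.
$("#count").text(count);
$("#reset").toggle(count > 0);

// Declarative: describe what the UI *is* for a given state and let the
// library figure out the DOM mutations.
const view = (count) => (
  <div>
    <span>{count}</span>
    {count > 0 && <button>Reset</button>}
  </div>
);
```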
We like the MVU pattern. It lets us create abstractions as we identify them, not before. And we happen to use React for rendering only, not state or component-orientation, although components may be created automatically/transparently based on our declared HTML.
I've been seeing a lot of this and it is a reasonable solution to the problem. I just immediately wasn't happy with the fact we were still feeding into React etc. I actually did a cool experiment (super rough) with XState where I granularly applied updates. So I think these ideas could play together nicely:
Our toolchain (F#/Elmish/Fable) had another option for rendering. But with a preponderance of React developers I think it didn't make sense for the maintainer to keep it. The important point here is that our developers don't have to know or care what the renderer is. We had to use keyed in some cases to make React do the right thing, but otherwise it just looks like HTML (as F# functions) to us. Our organization/abstraction strategy is at the language level rather than the UI library level. And it is possible to switch renderers if an alternative presents itself and the need arises.
We could use React bells and whistles (some projects do), but we choose not to. We don't want to get pulled into over-abstracting.
In a sense, you've bought into a different framework. Instead of betting on the renderer, you are betting on your business logic, which is a good bet to make. What is interesting to me is that frameworks are continuing to develop new techniques, like Svelte's animations or React's Concurrent Mode or Server Components, which are unique to them.
Obviously, we can opt not to use these features, but to me this is very much a different type of framework choice. Keyed is one thing, but there is a reason I'm convinced this isn't easily universalizable, as much as I support research in the area, even in things like Marko (the idea being that HTML with some extensions as a language could map to any framework).
I'm very interested in what the renderer is doing, since only through its primitives do we have the knowledge to fully leverage things like compiler analysis. There are benefits to separation, but that always comes with a tradeoff in terms of optimization.
I assume we might have to make some React-specific tweaks if performance becomes an issue. But we're still waiting for that day. I suspect that's because we follow FP, which means we only create pure components.
I wrote about it here. Edit: Well, just the organization part. I suppose the tech is glossed over a bit.
I think client-side is plenty optimal in general. We are reaching the limit of what we can do here. Honestly, I think this is the last overhead to remove in the browser-rendering part of the equation, and we are only getting at those last dozen percentage points or so. I like solutions like what you are proposing, because if you ever do hit the need, your decision will be more empirical. You will go with the smallest/fastest choice, as you aren't that dependent on the renderer's features.
But as an author here I'm always going to be looking to do better. The real push here I imagine will be coming from Server Side rendering and isomorphic solutions. I didn't really touch on this but that same ability to analyze state can improve bundling. Tooling is the next frontier of JS frameworks. Fatigue is ending, now it's time to battle complexity.
I totally feel you! ❤️ Components should die as a concept and not be bound to rendering and reactivity. Though I think you could explain a bit more in the article what components are in your meaning, as I'm 100% sure people will read it as code modularization, which is not the point.
Anyway, I'm so happy someone is moving in the same direction.
Modularization is still important; it's just that the re-rendering UI component as we know it from React is restrictive. And this seems to be the one thing most libraries share, even if they do it differently. Which is probably why this article can be classified under unpopular opinions.
But creators understand:
If you create 50,000 variables instead of using a loop, you'll notice a significant performance drop. Does that mean that variables are doomed to be replaced with separate pointers and values to cover that use case, or simply that they're not supposed to be used this way?
The problem with frameworks is not their limits but the inability to fine-tune them to your needs. This goes hand in hand with how we obfuscate and distribute JS dependencies.
Imagine CRA, but with the "framework" functionality hosted in a lib folder. This way you could suggest a recommended structure yet let coders decide what's better for their project and avoid unnecessary functionality/overhead. That's what I'd love to see in the future.
Best regards, lurking purist:-)
If I were writing an optimizing compiler, maybe.
I'm not quite following the CRA example. I gather you mean something different from tree-shaking. Frameworks like Svelte are playing at only including the code you need by abstracting the underlying JS with nice DSLs. In doing so they narrow the band and capture intent better. Even things like JSX do this to some degree.
I was saying that the language of reactivity or hooks makes for a powerful DSL to describe application updates without relying on components for that. Svelte is already going along this path. I find it takes a runtime solution to motivate a compile-time one; we do things manually before we automate them. If this unlocks this sort of capability at runtime, compilers will follow (as the tooling becomes sophisticated enough).
TL;DR The problem with framework abstractions is not the abstractions themselves, but rather the inability to change them.
I was saying that I'd like to have control over the tech I'm using, control over its code. You would agree that forking React or Svelte to incorporate it in your codebase is a nightmare. But JavaScript frameworks can and should be distributed as code, as a template project with small editable library functions.
I agree with you, optimization will always be needed. Not ephemeral one-size-fit-all optimization, but a custom, particular project oriented fine-tuning. And the most efficient way to do that is to simply edit the code.
Ok, gotcha. Hmm... this is the first I've heard this particular argument. React is on one side, with a heavy VDOM abstraction at runtime, and Svelte on the other side, where the compiler takes care of everything.
I'm going to take note of this, because Solid's "everything is just a reactive primitive" approach lends itself to this. Not sure what to do with that, though. Templating is the one place where there is always a lot of code, even in things like Lit. Diffing solutions aren't really end-user tweakable, and less-diffing solutions like Solid are bulkier without leveraging tools like compilers.
I wonder if it would be feasible, using a sufficiently advanced compiler, to not only make the divisions between components vanish in the generated code, but also translate reactivity into optimal imperative updates. That is, while the source code would be written in a declarative, reactive style, the generated code would have no signals, observers, memoization, effect functions, etc., just direct imperative updates to the relevant DOM nodes, colocated with the corresponding change in the data model, as if we had written it by hand in the painfully hard-to-maintain way. Better yet if we could eliminate all list diffing, though that might require us to change the way we fetch data from the server. Would this level of compile-time magic be too much to hope for?
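For example, given a declarative source like `<h1>Hello {name}</h1>`, the hypothetical compiler output might be nothing more than:

```js
// Imagined compiler output: no signals, no observers, no subscriptions.
// The DOM write sits directly next to the change in the data model.
const el = document.querySelector("h1");
let name = "world";

function setName(next) {
  name = next;
  el.textContent = `Hello ${name}`; // as if we had written it by hand
}
```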
Bingo. Svelte actually does this at a component level. There are no actual subscriptions etc. The only unit is the component itself. But if you could take this further, you could make all the code compile this way. I'm not sure we can get rid of list diffing, but for MPA-style frameworks you could probably avoid it in most places.
And it turns out this knowledge of what is reactive actually lends itself to partial hydration, because you can instantly understand what could change at a sub-component level. You could literally ship the least amount of JavaScript to the browser.
I'd be lying if I was to say I wasn't working on a project already that is on its way to doing all of the above.
I'm guessing you're talking about Marko. If so, I can't wait to see where it's headed. Maybe it's too soon for you to answer this, but if I were to start writing an application now using Marko 5, would I need to do a big rewrite when Marko 6 comes out?
The syntax is pretty locked, at least grammatically; open to suggestions on exact syntax: dev.to/ryansolid/marko-designing-a.... But it's a bit like the move to React Hooks. Old Marko will work (slightly less optimally), but if you were going to rewrite everything with Hooks anyway, I could see the desire to wait.
We have already written and benchmarked the server-side compiler, and we've closed the gap with Solid in raw SSR speed. What is left to do is the browser runtime and sub-component hydration. I'm going to write a more in-depth article on this in the future. We started with basically a pre-optimized runtime reactive strategy, but we were limited by the compiler's ability to analyze intention (similar to how Svelte assumes lets are signals, more or less). Solid's explicit control gave it a performance edge in that regard. But at the same time new Marko still had a certain amount of runtime overhead. It was fast - slightly ahead of Svelte, in the fast-VDOM range - but we weren't happy that it felt like a compromise in a sense.
But about a month ago Michael Rawlings had an epiphany in terms of how to achieve Svelte's compile-away reactivity with fine-grained, component-independent updates. We're still vetting that, but early benchmarks indicate we've succeeded in removing the majority of the overhead of the framework. I will share more as I have more concrete things to share.
Good article Ryan. I agree with all of your performance reflections. But, IMHO the problem with components is not in those concerns but in how they are being understood & used by the community.
Web components, at least the standard ones, were devised as a reuse solution and not a modularization one, as @brucou ably argues. From an old-fashioned perspective of web experience, where it is assumed that users need & want to work with closed applications in an '80s style running in browsers, it could be true that Web Component technologies have no relevant role here.
However, the world is rapidly changing, and frequently we as developers don't realize it. While users nowadays demand new interaction models based on an omnichannel, multi-device world, where interaction experiences are based on oral dialogues or micro-gestures on tactile watches & screens, developers go on creating fenced solutions based on web & mobile technologies far from what users expect.
As developers, we should be concerned with providing new experiential models aligned with user demands. A dentist appointment should be first a mail attachment, then a calendar date, and then a notification on my preferred wearable. In that no-fences world, experience flows liquidly from one channel to another (mail, web, push notification) and from one device to the next. Experiences are immersive. A YouTube video is a small player on my mobile while I'm coming back home on the underground. Then, when I arrive home, the video becomes a full experience simply by means of a gesture pointing at my smart TV while I relax on my sofa with a cup of tea.
In this new world, Web Components are the basis for supporting omnichannel, multi-device, liquid experiences where the interaction model with the user traverses more than a single application. Now the closed web experience - and in particular the term application - is a forbidden word, because users are not interested in this kind of metaphor.
From the B2C point of view, businesses use Web Components as a means to create a corporate dialog with final users. Components are transactional access points that enable fresh business-spreading strategies around the Web. There is no longer a single corporate site where a corporation centralizes user dialogs. There are lots of interactions: in my Google results, in ads on the web, in my voice assistant in the car, in my watch notifications, etc. All of those realities are Web Components.
From a B2B perspective, REST APIs are becoming declarative, HTML-centric dialects allowing business experts to insert access points to other businesses on the Web. Cloud-based companies offer payment, commerce, or whatever other solutions, and I only need to insert those companies' HTML snippets to get easy & straightforward integration with them. Here Web Components work collaboratively on foreign webs as a DSL to create a declaratively organic experience based on composition, including both our own & external tag families.
The user demand is nowadays a reality. People don't type out texts in WhatsApp; they dictate to the microphone. Components are the technology ready to be used in that direction, to abandon silo-based experiences. The only question is when we, as developers, will realize that the world has changed.
I've definitely said for the longest time that framework UI components aren't the same as Web Components. Different goals, etc. The interop goals of Web Components are also different from the reusability goals. I actually wrote a whole article on this that I haven't published yet.
Web Components are a wonderful widget platform, but I'm not convinced anything beyond the most basic ones are great application building blocks. I was talking with Justin from Lit a few months back when we were looking at the potential of using Declarative Shadow DOM at eBay, and the one thing that was clear, at least from his perspective, is that for Web Components to work across environments you will be relying on libraries/frameworks. There are always going to be gaps in the standards. We end up replacing one type of framework with another.
On one hand, I don't think these things need to be at odds with each other, as the framework can live inside the component. On the other hand, when you hear people like Rich Harris talk about Web Components (and I am generally in agreement), compared to the power he has to optimize and orchestrate with Svelte, especially around things like animations, there are clear tradeoffs. Not everyone needs this sort of orchestration, but SPAs exist for a reason.
There are places where this interop is key, and there are others where an optimized single experience trumps it. It's where they meet that is interesting, I think.
This is hard because in terms of production Marko is rock solid, but we all want to play with the new toys, and the new Marko brings a lot of cool things. We are still aiming for later this year, and hoping to do a beta release this summer.
Maybe we can meet halfway. Projects like Vite + HMR have made the dev experience a lot smoother. At some point in the process we will be backporting the new Marko syntax into Marko 5. This will mean that while you won't get to leverage the new technology, you will be able to slowly migrate old projects - or, in your case, use the new syntax so the bump to Marko 6 won't require a code change to benefit. Obviously writing new transformations for the compiler is a time investment taken away from working on Marko 6, but it sounds like there is interest here.
I will look at what it takes to get this out.
What approach is the Marko LSP server going to use for type checking?
I was waiting for more information from Dylan, but he's out this week. To my knowledge, probably something similar to Svelte. We see the limitations and were sort of surprised by them, but if that is enough to satisfy the requirements, and it's quicker, it can help perception a lot. Getting TS in the templates would be a huge win as it is, and we want to prioritize the architectural considerations first.
I spend a decent amount of time pointing out the tradeoffs of framework decisions and benchmarking with vanilla JS as the control. This is constantly met with, more or less, "then stop using a framework" or "see, vanilla wins all the benchmarks". I figured a section header that read "Your Framework is Pure Overhead" was just inviting more of the same, since it is really beside the point and doesn't lend anything to the discussion.
Great article! I share most of your tech vision, but from a different point of view.
Traditional Frameworks (and Svelte) are only "Components" for the Programmer.
To the User the end result "Product" is one big monolith.
From 1994 onward, I saw the Web grow big because Users could easily copy "code" from other websites.
WWW wasn't the only technology back then.
But it was the only technology where entry-level was low.
And we can't help but agree that "web development" has turned into something for-rocket-scientists-only.
Using frameworks (and Svelte) is like buying an IKEA Billy bookcase glued together, never to be taken apart again. Unlike in the early Web days, it is impossible to learn how to build/copy/enhance/extend your (own) bookcase.
That is not how Tim Berners-Lee envisioned the Hyper-Web!
Web Components technology is not about technology.
Web Components are about modularizing the whole stack.
Functionally, like how we use CDNs and libraries.
(Alas, as Lea rightly complained, there is no technology yet to rate & share good Web Components.)
Web Components are Web Components are Web Components:
Web Components Technology
Can the implementation be made better?
Sure; Apple, Google, Mozilla and Microsoft are actively working together.
Will the implementation be made better?
Yes, bright minds like Rich Harris inspire others
And don't forget, CPUs still get faster every year.
"Performance" is becoming a non-argument fast
PS. Most current Web Component developers are developing monoliths.
Largely repeating my earlier comment:
So even if there are faster CPUs every year, trends are conspiring so that a web application will more frequently encounter devices with lower single thread performance - which is a problem as most third-party browser technologies are still single threaded (Why can't we just make everything multithreaded?; meanwhile the browser itself is moving many non-JS tasks off-the-main-thread). At this point in the game being able to do everything on the main thread simplifies your application development, while leveraging web workers introduces you to an entirely new set of trade-offs. So getting the most work out of the main thread is still very much an issue.
Also I really wanted to like Web Components.
Having been introduced back in 2011 they seem to be a product of that time where most innovations were entirely client focused (i.e. CSR). By the time they became viable, CSR frameworks were already scrambling to retroactively bolt on SSR, while Web Components seemed to lack a server-side/hydration story. Being able to split server and client-side aspects of a component or being able to share the markup template(s) in an implementation agnostic manner would seem like a good idea.
The most important aspect all these discussions forget is:
It has NOTHING to do with technology
In August 2019, the W3C and the WHATWG agreed that the WHATWG would take the lead on web development standards.
The W3C will only give the final "it's a standard" approval.
The WHATWG is by-invitation-only.
And to date, Apple, Google, Mozilla and Microsoft haven't invited Facebook yet.
and it is ALL about technology
This means no single company can get away with single-company-dominated technologies.
If you follow the threads, you see the four companies working together better and better, something I have never seen in my 31 active Internet years.
And of course they are slow... they all have to agree. I can't even agree with my wife on everything.
So/but the "V1 Web Components" standard (V0 was a Google party, not a standard) will only get better
And yes, Facebook "owns" 60-70% of the front-end market, and doesn't even mention Web Component technology in the latest React release.
Once AltaVista owned the search market
Once IE had 90% of the Browser market
Once Flash was installed on every device
React is the new Cobol
Apple (WebKit), Google (Blink), Mozilla (Gecko), Microsoft (Trident/EdgeHTML), Facebook (?).
I think the point you are trying to make is that Web Components are a standard while React is not.
However not all standards are adopted by the industry as a whole.
Example:
AppCache: Douchebag
Also from the article's author: Maybe Web Components are not the Future?
React merely has a visible, vocal support base - which makes its component model seem popular.
With reference to what 100%?
Usage statistics and market share of React for websites
React Usage Statistics
jQuery Usage Statistics
PHP Usage Statistics
I don't really understand the point of this article. Suggesting components are pure overhead implies that one can omit/remove them. I mean, you can, I guess, and that's how we did things in the mid-2000s era of web development. That was legitimately a terrible approach compared to modern practices, or at the least not very maintainable or scalable.
It feels like this article at one moment critiques Svelte and compiler-based approaches, and then goes on to imply an alternative which is seemingly the same approach.
It suggests "primitives" as an alternative to "components", but AFAICT, such primitives, to be declarative in nature, would require a rethinking of HTML / DOM and current web component architecture. Not that I disagree with this, and there have been very good critiques & criticisms of web components / custom elements from day 1 of the v0 proposals, but it also means that nothing suggested here is currently feasible.
So what is the point here?
I'm talking about disappearing runtime components. I expect people to still author with components. You are right that just removing components would be unscalable, but execution-wise it has benefits. I'm suggesting that we need to lean on conventions/compilers if we want declarative authoring, but we can do better on how things execute.
My critique of Svelte is that they don't actually remove runtime components. Yes, they have no VDOM, but their whole update cycle is still component-based: you invalidate state and a component re-renders. I don't have a problem with a sufficiently advanced compiler, and that was what I was hinting at at the end, but Svelte is not it today.
I'm suggesting that if you make the change cycle independent of the components at runtime, the fact that you broke your code into different modules (components) is irrelevant to how they run. Not only is this a door to performance improvement, but it removes that whole dance you have to do in frameworks where the decision of how you break up your components impacts how they run. It impacts the code that needs to be present in the browser, the code that needs to execute for hydration. But how you organize your code shouldn't matter.
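As a sketch of what I mean (Solid-flavored, with made-up component names):

```jsx
import { createSignal } from "solid-js";

// State hoisted out of any component. Whether the markup below lives in
// one component or is split across three, the update path is identical:
// only the nodes that read `count` subscribe to it.
const [count, setCount] = createSignal(0);

const Header = () => <h1>Count: {count()}</h1>;
const Controls = () => (
  <button onClick={() => setCount(count() + 1)}>+1</button>
);

// Reorganizing this into more or fewer components changes nothing at runtime.
const App = () => (
  <>
    <Header />
    <Controls />
  </>
);
```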
I don't think we necessarily need to change the underlying platform, then again almost all frameworks are pushing against its limits. It's why we are so compiler-focused these days, but our concepts of how these JavaScript frameworks work need to evolve. I believe (and we are already seeing this in things like Qwik) that this is the next unlock. Everyone picking up Signals is just the start.
My favorite quote is Antoine de Saint-Exupéry's:
This article sums up my feeling (which I haven't been able to properly articulate in writing so far) about the complexity we've dug ourselves into, which prevents us from seeing the simplicity of "natural" (perennial) HTML and CSS. And while vanilla CSS is appropriately hailed in comparison with Tailwind (as a "worthy" alternative) in this article, or even (maybe unjustly) glorified in this one, I felt the need for someone to write a Dev.to article mirroring what Jeremy Keith or Chris Ferdinandi have been saying for years, with a more applied approach to the fundamentals of frameworks and why they exist. So thank you for this article!
Here are some other equally satisfying insights into this way of thinking: The unreasonable effectiveness of simple HTML by Terence Eden, My stack will outlive yours by Steren Giannini or The web didn't change; you did by Remy Sharp. Pretty sure there are a lot more out there...
Great article!
A lot of what you say reminds me of the downsides people find in OOP: at some point you need so many accessors and shared methods that your objects hardly encapsulate anything. More often than not, components are indeed implemented as classes - or something very close to them - and you end up with an absurd number of props and cumbersome ways to share them.
I recommend the most excellent talk by Catherine West and would quote two things:
It's about Rust and game dev, sure, but I totally relate as a web dev. Sometimes I just want a simple function. Sometimes I would just like a few UI elements to be "killable" and have that "killable" defined somewhere as a simple function.
I used to think of frameworks as a way to suppress the mess, but that's a lie. At best you hide it in patterns; if what you design is messy, the functions will be too. Today I'd rather have a simple framework that makes my life easier in two respects:
My days of writing DOM-less components or trying to fit Mapbox entities into Svelte are over! I did it. I won't do it again 😄
I loved this article! I have been coding websites for over 20 years and I've seen patterns come and go. I've seen problems, the "solutions" to those problems, and how those solutions somehow became a standard. Then those standards grew and grew, their ecosystems became beasts, and at some point the whole thing was totally replaced by something simpler, a new solution. And the whole cycle started again. We're repeating ourselves, and the common remark usually tends to sound nostalgic for the good old times. Has anyone used Visual Basic 6? ASP components (not current ASP.NET, but back in 2001)? The same concept with a different syntax has survived and it's what we're using today, with a lot of goodies and sugar on top.
I wish we all knew the original problems with "HTML and a sprinkle of JavaScript", and how we got from there to today's "standards" where you barely write HTML anymore. It's an interesting transition, and if you know why we got here, it's way easier to see through the trends and hypes and find real answers to common problems.
My personal opinion is that small libraries with simple APIs are the way to go, but one way or another they all find their way to overengineering.
How would this look in, say, Svelte? Does it change the way you write Svelte, or does the compiler do the heavy lifting? Does DX change in some clear ways?
I think the compiler could do most of the heavy lifting. The biggest limitation for Svelte would be structurally the SFC files themselves. They separate stuff too much to fully leverage what I'm talking about, but maybe that's fine. Like if you wanted to add state in an iteration of the `{#each}` helper in Svelte, that would require breaking off to a separate file. Not that this level of adoption is essential, but just to illustrate my point. In Solid's JSX you could literally just create state inside the loop helper iteration. We can create state inside ref callbacks too. Things like Svelte Actions' `use:` bindings can be inlined or not, because it's all the same thing. In Solid's case that's because the reactivity is runtime.

If a compiler had access to all the code you imported, and the syntax were as well understood as it is in .svelte files, you could do this as a compiled approach. This doesn't just mean removing the overhead of component communication. Things like Svelte Stores would not be necessary, as you could literally just write `let` and the `$:` operator anywhere. Svelte components more or less topologically sort and wire execution to imitate a typical runtime reactive library; what if that extended beyond the files you use for components? There would be opportunities to revisit Svelte's SFC format, but the fundamentals would stay the same.

I think the common assumption is that this isn't possible. But some of the recent work we've been doing on Marko suggests otherwise. Still exploring the implications of this, and I will share more as we uncover it. But that's what makes this exciting.
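For reference, here is that compile-time reactivity as it exists today, scoped to a single .svelte file (the hypothetical version would let these declarations live in any module):

```html
<!-- Counter.svelte -->
<script>
  let count = 0;          // the compiler treats this `let` as reactive state
  $: doubled = count * 2; // re-evaluated whenever `count` changes
</script>

<button on:click={() => (count += 1)}>
  {count} doubled is {doubled}
</button>
```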