Discussion on: Components are Pure Overhead

brucou • Edited

The thing is, most abstractions come with some overhead... And we still use them happily because they are also useful. So I don't see the focus on overhead or performance as particularly interesting in general. But modularity, separation of concerns, cohesion/coupling, declarativeness -- those are the kinds of things I think we should think about much more. That would be worth a series of articles in and of itself.

Long story short, components are not going anywhere because modularity remains a necessity for any codebase of reasonable size. What exactly is a component may vary, but the idea that you split a big thing into small things because the big thing is too big for certain purposes -- that is not going away.

Because modules are separate, standalone units, they facilitate reusability in miscellaneous contexts, which in turn positively impacts maintainability, as you mention.

Modularity also relates to composition because the small things must be factored back into the big thing. A good modularity story must go hand in hand with a good composition story.

So the interesting question for me is how to modularize programs or applications.

Talking about UI frameworks, I noticed that:

  • components are not often separate, standalone units (because they rely on some context or external information to perform computations)
  • components are often not reused

In other words, modularization of web applications is more often than not suboptimal, and instead of the spaghetti of imperative programming, we have the spaghetti of many components that interact with each other in obscure ways through undeclared dependencies on external data. I discuss that at length in my Framework-free at last post.

To be real modules, components should be as independent and interchangeable as ESM modules are. That in particular means that they should have an interface that allows predicting the entirety of their computation, so that the program that uses the component need not depend on its implementation details, and reciprocally the component need not know a thing about the program that uses it.
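
A minimal sketch of that contract (the formatPrice module and all its names are purely illustrative):

```ts
// format.ts - a real module: the exported signature is its entire
// contract, and the computation depends only on its parameters.
export function formatPrice(amount: number, currency: string): string {
  return new Intl.NumberFormat(undefined, {
    style: "currency",
    currency,
  }).format(amount);
}

// app.ts - the consumer depends only on the interface, never on the
// implementation details of format.ts:
//
//   import { formatPrice } from "./format.js";
//   console.log(formatPrice(9.99, "EUR"));
```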

So the future is not component-less in the least. In the same way, the fact that ESM modules can be bundled into a single file does not mean that ESM modules are unnecessary overhead. But we may indeed be interested in better ways to modularize our code, that is, a better componentization story than we have as of now, because a lot of what we call components are not actual modules, which, as we know, seriously complicates the composition story.

So I am thinking, let's see how the story continues and how you will address modularity in whatever it is that you propose.

For those interested in modularity, coupling and cohesion: http://cv.znu.ac.ir/afsharchim/T&M/coupling.pdf

Ryan Carniato • Edited

In the same way, the fact that ESM modules can be bundled into a single file does not mean that ESM modules are unnecessary overhead.

In the same way that the bundler removes ESM modules, frameworks can remove components. If you were loading each ESM module independently I would argue that is unnecessary overhead. And it's the same thing here.

I'm not saying people won't modularize their code and write components. Just that they don't need to be a mechanical part of the system, and we should explore removing their weight. It started from a performance perspective but it has DX implications too.

I am familiar with the idea of driving everything from above. I'm just wholly not convinced. There are similarities to MVC and MVVM and those are perfectly good models. In fact, I think there is a good portion of pretty much every app that benefits from this. However, at some point, the rubber meets the pavement.

And sure, you can write everything in vanilla JS. That's always an option. As is hoisting state. The same can be said of web components. But there is that zone where cohesion matters, and that's where I'm focusing. This is the domain of UI frameworks.

The reason that React and other frameworks are looking so hard at solutions here is that they are essentially trying to see if we can hoist state but let the authoring experience be that of co-location. It's a sort of inversion-of-control-like pattern. Solid is like what happens if you decide to write a renderer starting from a state management solution, and in doing so we kind of stumbled on a solution that achieves exactly that.
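
For illustration, a hedged sketch of that hoisting using Solid's createSignal (the increment/current helpers are hypothetical, not anything from the article):

```ts
import { createSignal } from "solid-js";

// Hoisted state: the signal lives outside any component, so no
// component instance owns it...
const [count, setCount] = createSignal(0);

// ...yet the authoring experience stays co-located: any component or
// plain function can read and write it as if it were local state.
export function increment(): void {
  setCount(count() + 1);
}

export function current(): number {
  return count();
}
```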

The too-few or too-many component issues still transfer outside of the components themselves. It's true of state management too: any sort of hierarchical tree where there are ownership/lifecycles and the need to project that onto a different tree. I think it is important to see that there are two trees here, but just as important not to force things too far in either direction. Pushing things up further than they want to go is bad for different reasons than pushing things too far down.

That's really the whole thing here: removing unnecessary boundaries caused by misalignment. Modularity has its place, but you don't need a JavaScript framework to give you that. It comes down to the contract of your components. Some things naturally are coupled so why introduce the overhead in communication there as well? The problem with common frameworks is you aren't breaking things apart because they are too large but for some other reason. I want to remove that reason. Breaking stuff apart is perfectly fine but why pay the cost when it comes back together?

brucou • Edited

If you were loading each ESM module independently I would argue that is unnecessary overhead.

Overhead, maybe; unnecessary, not sure. There are the costs of things, and then also their benefits. So you need to sum both.

Just that they don't need to be a mechanical part of the system, and we should explore removing their weight.

Sure. That is the same idea as dead-code elimination, i.e. not bundling library code that is not used. But that does not mean libraries are overhead, right? The dead code sure is.

And sure, you can write everything in vanilla JS

Interestingly, that may be the zero-overhead solution. But in the article I was not advocating using vanilla JS only. With Functional UI you can still use React, for instance, but you would only use pure components. Pure components are actual modules. The module interface is the parameters of the function. They depend only on their parameters, which makes them independent, so they can be kept separate and reused in many places. They compose easily because they are functions. In fact, we haven't yet found a much simpler way to compose/decompose computations than functions.
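
A minimal sketch of such pure components (Greeting and UserCard are hypothetical names):

```tsx
// A pure component: a function of its props and nothing else. Its
// interface (the props type) fully determines its output.
function Greeting({ name }: { name: string }) {
  return <p>Hello, {name}!</p>;
}

// Composition is just function composition: the parent depends only
// on the child's interface, never on its implementation.
function UserCard({ names }: { names: string[] }) {
  return (
    <ul>
      {names.map((n) => (
        <li key={n}>
          <Greeting name={n} />
        </li>
      ))}
    </ul>
  );
}
```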

Now, a user interface application can be seen as a series of computations (reactions to every incoming event - that's Functional UI), but also as a process that is alive as long as the browser page is open. So how do we modularize long-lived processes? There have been several answers to that question. In microfrontend architectures for instance, modules are mini-applications that can be deployed entirely independently. They communicate with the other modules through message passing (or events) to realize the whole application behavior. Mini-apps as modules come with their own set of tradeoffs and overhead, but those who adopt that architecture find it worth the independent deployability that they get. You can have different teams working completely independently on the smaller parts, which gives you development velocity. But that's just one way to modularize; there are others.
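
A hedged sketch of that message-passing style between mini-apps (the event name and payload are hypothetical):

```ts
// Mini-app B (deployed independently) subscribes to a domain event;
// it imports nothing from mini-app A.
window.addEventListener("user:signed-in", (event) => {
  const { userId } = (event as CustomEvent<{ userId: string }>).detail;
  console.log(`Loading dashboard for user ${userId}`);
});

// Mini-app A publishes the event; it knows nothing about who listens.
window.dispatchEvent(
  new CustomEvent("user:signed-in", { detail: { userId: "42" } }),
);
```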

Some things naturally are coupled so why introduce the overhead in communication there as well?

Yes, cohesion/coupling is a discussion worth having. What makes sense to group together? What is the shape of that group? How do the groups communicate? etc.

The problem with common frameworks is you aren't breaking things apart because they are too large but for some other reason.

Absolutely agree. How do we modularize in ways that preserve properties of interest? That is a discussion worth having.

Breaking stuff apart is perfectly fine but why pay the cost when it comes back together?

Sure, but also why not? Costs have to be weighed against benefits. For instance, not paying the cost of reassembly (when your small modules become a big one again) through compilation may have other costs that are not discussed or obvious; and/or produce benefits that are not worth the trouble. I can't talk about what you are proposing because I don't know what that is.

The general idea of being efficient or economical is a good one; it is the basis of any proper engineering approach. But my point is that the devil is in the details. So I am looking forward to seeing how you approach the problem, what benefits your approach will provide, what the associated costs are, and what the sum of the two looks like.

peerreynders

So I don't see the focus on overhead or performance as particularly interesting in general.

I think that this position is informed by past experience on the back end and on desktop.

  • Personal (i.e. client side) computing has shifted to handheld devices.
  • While some flagship devices are still increasing fat-core CPU single-thread performance, average devices are opting for a higher number of low-power/small-core CPUs with lower single-thread performance.
  • Moore's law is done.
  • While improved mobile network protocols promise performance gains under ideal conditions, growth in subscriptions and consumption can quickly erode gains on the average connection.

As a result, future average-device single-thread performance and connection quality could be trending downwards in many circumstances.

Aside: The Mobile Performance Inequality Gap, 2021.

The headroom necessary to accommodate the overhead of microfrontends may exist over corporate backbones - not so on public mobile wide area networks. So in many ways a lean perspective, much like in the embedded space, is beneficial in web application development.

Most of React's optimizations in the last few years have been geared towards getting the most out of the client's single (main) thread performance, in order to preserve the "perceived developer productivity" of the Lumpers component model where "React is the Application/Architecture", and to forestall the need to adopt an off-the-main-thread architecture that moves application logic and client state to web workers, significantly increasing development effort. Svelte garnered attention because it is often capable of delivering a much better user experience than React on hyper-constrained client devices, by keeping the JavaScript payload small and the CPU requirements low while also maintaining a good developer experience.

components are not going anywhere because modularity remains a necessity for any codebase of reasonable size.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, their cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose a runtime cost where there is a runtime benefit - all other (design time) abstractions should ideally evaporate at compile time.
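
TypeScript's type system is a familiar example of an abstraction that evaporates this way:

```ts
// Design-time abstraction: the interface documents and enforces a
// contract while the code is being written.
interface User {
  id: string;
  name: string;
}

function describe(user: User): string {
  return `${user.name} (${user.id})`;
}

// The emitted JavaScript contains no trace of `User`; the abstraction
// is erased at compile time:
//
//   function describe(user) { return `${user.name} (${user.id})`; }
```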

brucou • Edited

I think that this position is informed by past experience on the back end and on desktop.

Maybe. But I also think that user experience is the thing we care about. Performance, understood here as CPU-bound, is one of many proxies for that (along with look and feel, network, offline experience, etc.). The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources. Microbenchmarks, being by design unrepresentative of the user experience, are interesting for library makers but not so much for people picking libraries. That is, you would not pick a framework or library based on microbenchmarks. That is why I never find arguing over a limited definition of performance in unrealistic conditions remotely insightful.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, their cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose a runtime cost where there is a runtime benefit - all other (design time) abstractions should ideally evaporate at compile time.

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime. Should we compile JavaScript to binaries and send that to the browser? Compiling is great, inlining is great, anything that makes the code run faster is great, but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity to a landscape that is already crowded.

Thread Thread
 
Ryan Carniato

Yet the article is about removing constraints caused by the current abstraction. The foundations here predate React and this current component-centric view, and are echoes from a simpler time.

That being said, I'm not suggesting going back there. My argument here has been about removing the cognitive overhead of contending with two competing languages for change on the React side (VDOM vs Hooks), and about liberating non-VDOM frameworks from unnecessary imposed runtime overhead that hurts their ability to scale.

But if that isn't convincing enough, consider the implications for things like partial hydration. This has much larger performance implications.

When I step back, this isn't micro-optimizing but adjusting the architecture to ultimately reduce complexity. Nothing like leaky abstractions to add undue complexity. Every once in a while we need to step back and adjust. But like the pool I bought last week that won't stay inflated, it often starts with finding the leak.

Thread Thread
 
peerreynders

The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources.

How can you be sure that it isn't noticed? Squandered runtime performance is an opportunity cost to user experience.


A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread (Chrome Dev Summit 2018)

And there are costs to the business as well:

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)
Google Marissa Mayer speed research

web.dev: Why does speed matter?

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime.

JavaScript is the means for browser automation. Ideally most of the heavy lifting should be done by capabilities within the browser itself, coordinated by a small set of scripts. Unfortunately many JavaScript frameworks and libraries decide to do their "own thing" in pure JavaScript, potentially bypassing features that are already available in the browser.

  • Tree shaking is already used to remove unused JS.
  • Minification, which produces functional but not readable JS, is standard practice.

So tooling which emits the minimum amount of code necessary to get the job done sounds like the logical next step.

And at the risk of repeating myself:

Object-oriented development is good at providing a human oriented representation of the problem in the source code, but bad at providing a machine representation of the solution. It is bad at providing a framework for creating an optimal solution.

Data-Oriented Design: Mapping the problem (2018)

More than a decade ago part of the game industry, constrained by having to deliver optimal user experiences on commodity hardware, abandoned object orientation as a design-time representation because the consequent runtime inefficiencies were just too great. In that case it led to a different architecture - entities, components, and systems (ECS) - aligned with the "machine" rather than the problem domain.
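
A minimal, hypothetical sketch of the data-oriented idea behind ECS (all names invented for illustration):

```ts
// Component stores: flat, per-component arrays ("structure of
// arrays") instead of per-entity objects, so systems iterate over
// tightly packed data.
const positions: { x: number; y: number }[] = [];
const velocities: { dx: number; dy: number }[] = [];

// An entity is just an index into the component arrays.
function spawn(x: number, y: number, dx: number, dy: number): number {
  positions.push({ x, y });
  velocities.push({ dx, dy });
  return positions.length - 1;
}

// A system: a single function that processes every entity holding
// the components it cares about, in one cache-friendly linear pass.
function movementSystem(dt: number): void {
  for (let e = 0; e < positions.length; e++) {
    positions[e].x += velocities[e].dx * dt;
    positions[e].y += velocities[e].dy * dt;
  }
}

spawn(0, 0, 1, 0.5);
movementSystem(16); // advance all entities by one 16 ms frame
```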

Similarly, in the case of a (web) client application the "machine" is the browser. "Components" serve neither the browser nor the user at runtime - so it makes sense to make them purely a design-time artefact that gets erased by compilation - or perhaps "components" need to be replaced with an entirely different concept.

Beep LIN

There is always an interesting opinion coming from the functional-programming community, that is, a stubborn disregard for performance.

The thing is, most abstractions come with some overhead... And we still use them happily because they are also useful. So I don't see the focus on overhead or performance as particularly interesting in general. But modularity, separation of concerns, cohesion/coupling, declarativeness -- those are the kinds of things I think we should think about much more. That would be worth a series of articles in and of itself.

Yes, abstractions always bring overhead, but there is a thing called "zero-cost abstraction" in the C++ and Rust communities. Those costs should be paid at compile time rather than at run time.

What Ryan is trying to say here is simply this:

  1. For React, with its VDOM, components are cheap at runtime, so we can keep components at runtime for those VDOM-based frameworks;

  2. For non-VDOM-based frameworks like Solid and Svelte, a runtime component interface comes with a detectable cost, so we keep components only before compile time and eliminate them during compilation, so they vanish at runtime.

This is surely a legitimate argument: taking a little longer to compile, achieving better runtime performance, and doing no harm to modularity, decoupling, etc. Very close to "zero-cost abstraction".
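
A hypothetical before/after sketch of point 2 - conceptual only, not actual Solid or Svelte compiler output:

```ts
// Source view (conceptual): a component with one reactive binding,
//   const Counter = () => <button onClick={bump}>{count()}</button>;
//
// Compiled view (conceptual): the component boundary is gone; what
// remains is straight-line DOM setup plus a targeted update function
// bound directly to the text node.
function mountCounter(
  root: HTMLElement,
  count: () => number,
  bump: () => void,
): () => void {
  const button = document.createElement("button");
  button.addEventListener("click", bump);
  const text = document.createTextNode(String(count()));
  button.appendChild(text);
  root.appendChild(button);
  // In a fine-grained framework this would re-run automatically when
  // `count` changes; here it is returned as a plain update function.
  return () => {
    text.data = String(count());
  };
}
```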

brucou • Edited

Quoting from a previous reply:

Costs have to be weighed against benefits. For instance, not paying the cost of reassembly (when your small modules become a big one again) through compilation may have other costs that are not discussed or obvious; and/or produce benefits that are not worth the trouble. I can't talk about what you are proposing because I don't know what that is.

The general idea of being efficient or economical is a good one; it is the basis of any proper engineering approach. But my point is that the devil is in the details.

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime. Should we compile JavaScript to binaries and send that to the browser? Compiling is great, inlining is great, anything that makes the code run faster is great, but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity to a landscape that is already crowded.

Second line of thought: Functional UI also ignores components. In fact, Elm has long recommended staying away from arbitrarily putting things in components à la React, not because of some artificial FP religious values, but simply because they have found better patterns (that is, patterns with better tradeoffs).

Thread Thread
 
Beep LIN

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime.

Yes, by far JS is the most widely used (partly) FP-flavored language in the practical world, thanks to the hard and dirty work of the V8 and other engine teams, who treat performance as a major pursuit rather than ignoring it.

Similarly, the React core team does all the complex and dirty work inside the framework so that we can enjoy the neat f(state) -> UI pattern. And yes, they are trying their best to improve performance.

Should we compile JavaScript to binaries and send that to the browser?

Yes, that is why we have Rust and Wasm now, and they may bring great changes in the near future.

they have found better patterns (that is, patterns with better tradeoffs).

That is the point. In EE we have a concept called the gain-bandwidth product. For a given circuit pattern, the product remains constant: increasing gain will harm bandwidth and vice versa. It seems much like the argument that pursuing better performance and less overhead will harm modularity and neatness. When we have a fixed performance-neatness product of, say, 12, do we choose 2 for performance and 6 for neatness, or 4 for performance and 3 for neatness? That is what tradeoff means.

But that is only the beginning of the story. In fact, human beings are developing new circuit patterns, inventing new designs, and exploring new materials to achieve a better product. The same goes here. We cannot say that vanilla JS, React, Vue, and Solid share the exact same performance-neatness product and that therefore the only thing that matters is some kind of tradeoff. Not true. Framework authors are trying to push the product to a higher level. Ryan in this article is trying to point out something that can improve performance without harming neatness. In fact his work could be seamlessly used in XState or Raj or Kingly, all tools you mentioned in the Functional UI articles. That is pure progress. That is what you called better patterns bringing better tradeoffs.

Application-level engineers like us mostly accept the performance-neatness product determined by our infrastructure and make tradeoffs within it. But infrastructure-level engineers, like framework authors such as Ryan, have the higher duty of enhancing the product for the good of all.

Thread Thread
 
brucou • Edited

I feel like this is slowly drifting off topic. The title of this piece is "Components are Pure Overhead", an assertion that I reject as sorely lacking in nuance. Then "the future is component-less", which I also reject, because once again we have a framework author busy evangelizing his particular vision of the future through gratuitous, dramatic, click-baity formulas. As much as I like discussing programming topics, and god knows a lot of topics are worth discussing (modularization being a very important one), this kind of gross, ill-founded generalization irks me to no end and takes me away from actually spending my time addressing them.

Regarding the performance improvements of libraries, frameworks, compilers, etc., hats off to all those who bring them about. I am glad that they have found their calling and that their audience can benefit from their efforts. They generate options and enlarge the solution space. I do reiterate, however, that performance is just one variable among others, and that architects and tech leads need to take a holistic view when making decisions.

I do get the point that you can compile away "components" under some circumstances -- that works for any abstraction (Kingly for instance compiles away its state machines). I do get the point that removing the necessity to create components for reasons other than the benefits of modularity actually frees up the design space for the developer. All of that is good. Whether all of that is actually worth pursuing in your specific application/team/constraint context is another question. Your mileage will vary.