DEV Community

alexgagnon

11ty + Lit, a match made in heaven for simple sites

A demo repository for this post can be found at https://github.com/alexgagnon/11ty-lit.

Before we go further, a little introduction is in order since quite a bit of background knowledge is required to fully understand why we would even use these two technologies and how they work together.

Background

If you already know about Static Site Generators (SSG), Server-Side Rendering (SSR), hydration, Web Components, Declarative Shadow DOM (DSD), and modules and bundling in Node.js vs. a browser, feel free to skip these sections.

Intro

Modern web development has fully embraced component-based design, using self-contained, re-usable, and composable UI elements. This makes it easy to create design systems that are simple to use as a developer and create a cohesive experience for users. The leaders in the JavaScript ecosystem are the "big three" frameworks: React, Vue, and Angular. There are plenty of others as well, including newer ones worth mentioning like Svelte and Solid, and the one in this post, Lit.

Most JS frameworks provide both a way to easily define components (e.g. JSX for React/Solid), and a runtime that is responsible for updating the DOM to match some desired state using these components. As such, they are purely JS-based, using JS APIs to manipulate native HTML elements. Lit, however, takes a different approach.

Web Components

Modern browsers have released three technologies that together allow you to create your own HTML elements: Custom Elements, Shadow DOM, and HTML Templates. HTML Templates let you define the markup structure of an element, including passing in children from the "light" DOM (a.k.a. the main DOM) through a concept known as slots. Shadow DOM gives you the ability to create an isolated sub-DOM for each instance of the template. Since it is separate from the light DOM, styles and events don't automatically leak through and impact other elements, which is a good thing but takes some getting used to compared to working exclusively in the light DOM like pure JS frameworks do. Custom Elements is the API that tells the browser which component to use for a given custom tag name, the main function being customElements.define("tag-name", SomeComponentClass) (note that all custom element names must contain a hyphen, which identifies them as non-native elements). For example, when parsing <my-button></my-button>, the browser will check whether it has a custom element registered under the tag name "my-button" and use that. For more information, see the MDN Web Components documentation.
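Here's a minimal sketch tying the three technologies together (browser-only code; the tag name and markup are made up for illustration):

```javascript
// Define the element's markup structure once, using an HTML Template.
const template = document.createElement('template');
template.innerHTML = `
  <style>p { color: rebeccapurple; }</style>
  <p>Shadow content: <slot></slot></p>
`;

class FancyNote extends HTMLElement {
  constructor() {
    super();
    // Create an isolated shadow DOM for this instance and stamp the
    // template into it. Styles above won't leak into the light DOM.
    this.attachShadow({ mode: 'open' }).appendChild(
      template.content.cloneNode(true)
    );
  }
}

// Register the element; the tag name must contain a hyphen.
customElements.define('fancy-note', FancyNote);
```

With this registered, `<fancy-note>hi!</fancy-note>` in the page renders the template's markup inside the shadow DOM, with "hi!" projected in from the light DOM through the slot.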

Lit makes it easy to develop web components since it wraps much of the boilerplate associated with making them into a base class you can extend, LitElement. It also provides some syntactic sugar and tooling for common patterns through decorators and associated libraries.

Some additional nice things about the LitElement base class are that it allows you to define "reactive properties" (properties that cause the component to re-render when changed), and that it can synchronize HTML attributes and DOM properties (with a custom converter if necessary). FYI if you didn't know: HTML attributes are the text-only values in the HTML document, while DOM properties are the JS properties on the DOM nodes that are first created from parsing the HTML document and then manipulated through JS APIs, such as someElement.textContent = "hello!". Although not unique, as pretty much all the JS UI frameworks have similar functionality, this is important since it allows you to create UIs that auto-update based on changing data, and that are easily interoperable with plain old HTML documents.
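As a quick sketch of what that looks like (a made-up element, written in plain JS with `static properties` rather than decorators):

```javascript
import { LitElement, html } from 'lit';

class ClickCounter extends LitElement {
  static properties = {
    // A reactive property: changing it triggers a re-render. With
    // reflect: true, Lit keeps the "count" attribute in sync, and the
    // built-in Number converter turns the text-only attribute value
    // into a JS number on the DOM property.
    count: { type: Number, reflect: true },
  };

  constructor() {
    super();
    this.count = 0;
  }

  render() {
    // Re-runs automatically whenever a reactive property changes.
    return html`<button @click=${() => this.count++}>
      Clicked ${this.count} times
    </button>`;
  }
}

customElements.define('click-counter', ClickCounter);
```

With this, `<click-counter count="5"></click-counter>` starts at 5 (attribute converted to a number), and setting `el.count = 10` from JS re-renders the component and reflects the new value back onto the attribute.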

SSR/SSG

In terms of serving the application, developers have recently come to favour Server-Side Rendered (SSR) applications over Client-Side Rendered (CSR) apps, since they tend to perform better on metrics like time to first paint (i.e. when a user actually gets to see something) and SEO. This is because CSR apps often ship an empty HTML document and then fetch a whack of JavaScript to augment the DOM into the desired state. This makes the (large) JS bundle a blocking resource, with no content to display to the user while it loads and executes. SSR apps, on the other hand, pre-render some or all of the page as HTML, leading to a smaller JS bundle that can often be loaded asynchronously (less blocking), and content that can be immediately displayed to the user or a search crawler. A further step is to generate the site entirely as static files, in a process known as Static Site Generation (SSG). Since these sites are just a collection of files, they don't require a web application server, which makes them easier to distribute (for example, they can exist entirely on CDNs). This doesn't mean that SSG/SSR sites don't use JavaScript. It's still used for things like fetching data and DOM manipulation after loading, but at least some content and structural markup is present in the HTML rather than an empty document.

Some examples of these build tools/servers are Hugo (Go), Gatsby, Jekyll (Ruby), Hexo, and 11ty for SSGs, and Next.js, Nuxt.js, SvelteKit, and Astro for SSRs. Note that many of the SSR frameworks also work as SSGs, but not vice-versa. The ones that seem to be leading the pack in the JS ecosystem recently are Next.js for React based apps, Nuxt.js for Vue, SvelteKit for Svelte, Astro (which supports several different frameworks), and Hexo and 11ty for SSGs (which tend to target simpler HTML templating systems like Handlebars, Liquid, and EJS, rather than full JS UI frameworks).

But, how do we send components as HTML? This comes down to two factors: 1. how do we serialize a component's definition into HTML, and 2. how do we augment that HTML on the client so that it gets the JS functionality it needs to behave correctly?

Question 2 is handled by a process known as 'hydration'. During the build phase, the tools extract the JS related to things like event listeners and lifecycle methods (for example, the code that executes when a component's properties change). On the client, a small runtime identifies the plain HTML versions of these components and reattaches the JS functionality to them.

Question 1 is usually done by turning the template into a string that can be parsed on the client. For pure JS frameworks like React, the runtime includes its own template compiler, which converts the string representation into a format it can use to create DOM nodes. However, Web Components try to rely on browser standards rather than shipping their own runtime code. This is where the Declarative Shadow DOM comes in. Since web components use the shadow DOM instead of the light DOM, a way to write native HTML that represents this was needed. The DSD version of a component includes a template element which contains the default markup and styles, allowing the browser to turn it into real DOM nodes. For example, this is the raw HTML of what is sent for the <my-greeting> component we define in the example below (formatted for legibility).

<my-greeting name="friend">
  <template shadowroot="open" shadowrootmode="open">
    <style>
      :host {
        display: block;
      }

      .blue {
        color: blue;
      }
    </style>
    <!--lit-part ldXQt5xyzcU=-->Hey there <!--lit-part-->friend<!--/lit-part-->! Welcome to <span class='blue'>11ty + Lit</span>. You passed from lightDOM: <slot></slot><!--/lit-part-->
  </template>
  Righto!
</my-greeting>

Chrome and Edge have had this feature for a while, and luckily the other major browsers have recently agreed to support the standard (Firefox, Safari). In the meantime, before they ship it, there is a polyfill that can be used, which will be shown in the code below.

JS Engines, Modules, and Bundling

An important consideration for executing JS is where the code is running. In the browser's JS engine we have access to global objects like window, while in Node.js we have access to Node's standard library, which allows interacting with the operating system. These are not interchangeable; there is no global window object when you execute your code in Node.js!

A module can import other modules either through absolute (/) or relative (./ or ../) paths, such as import {map} from './utils.js', or through "bare" specifiers (e.g. import {map} from 'lodash'; notice there's no leading / before lodash). When we run this code in Node.js, its resolution algorithm knows to check the node_modules directory when it sees a bare module specifier. However, this concept doesn't exist when the code runs in the browser, so we need a way to handle them. We have three ways to deal with this:

  1. replace the bare specifier in place with a path to the package (i.e. turn 'lodash' into '/node_modules/lodash/index.js') in the code.

  2. use the new "import maps" specification which defines the paths for bare modules (this is not yet well supported, though it has some promising upsides like auto-deduping shared modules).

  3. bundle the separate modules into a single module so that no import is needed. This is commonly done by bundlers like Rollup, Webpack, or Parcel.
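As a sketch of option 2, an import map is just a bit of JSON in the page that tells the browser where each bare specifier lives (the paths here are hypothetical):

```html
<!-- Must appear before any module scripts that use the mapped specifiers. -->
<script type="importmap">
  {
    "imports": {
      "lodash": "/node_modules/lodash/index.js",
      "lit": "/node_modules/lit/index.js"
    }
  }
</script>
<script type="module">
  import { map } from 'lodash'; // now resolves via the import map
</script>
```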

From a performance standpoint, we want to minimize the number of bytes we're sending by minifying/uglifying, compressing, and only sending the bytes needed for the current page. For example, we definitely don't want to send duplicate code, and we don't want to send code for components or other libraries we aren't using on the current page. We also want to make sure we don't create a cascade of blocking resources, where module A imports module B, so it must be fetched and parsed, and then all of module B's imports must be fetched and parsed, etc., creating a waterfall effect. Luckily, ES6 modules and HTTP/2 lessen the impact of this, since imports must be statically defined at the top of the module (so we know up front exactly which files are needed) and HTTP/2 allows for sending more files in parallel than HTTP/1. Dynamic imports (loading on demand) are also available through the import() expression if we need finer-grained control over when resources are fetched.

Controlling how modules are broken apart into smaller submodules depending on requirements is a process known as code-splitting. For example, we may want to create an external module containing all our shared dependencies so that we don't duplicate them, or define a module containing only the components required for a particular page. In the main branch of the demo repo below, lit-html is duplicated since it's imported both in lit-element-hydrate-support.js and from our component. Manually crafting these "entrypoint" modules that import only the code they need for a particular purpose is more complex than just bundling everything together as in option 3 above, but can lead to some substantial performance benefits.

An additional optimization we could do here is to "version" the generated filenames, which means tagging the filename with a unique identifier such as the hash of the file's contents. This is an advanced cache-busting technique, since resources with new URLs are automatically fetched regardless of their cache settings, potentially saving some round-trip caching checks and preventing stale resources from being served. A couple of pieces are needed to get this to work though, as you need to know everywhere in your source where the initial resources are referenced and then replace their paths with the new filenames that were generated. Some build tools can do this because they allow setting HTML files as the entrypoints for module resolution, so they can both bundle the referenced assets AND update their paths in the source to match the output.

In the example below I'll show how these performance optimizations can be done in a separate branch, since they make things more complicated and aren't necessary to get the demo working; check the performance-optimizations branch's README for additional explanations.

So, now that we have some background information, the entire purpose of this article is to answer this question...

How do I use 11ty to generate a site that uses web components created with Lit?

But first, why would I use these tools rather than X and Y? As always, it's subjective and comes down to preference. For me, they hit the sweet spot of being simple to use, well documented with easy access to help, and actively developed.

For Lit, it follows browser standards, has a small runtime, and has some officially supported companion libraries that I frequently use (for example, multi-lingual support). Note that Lit also has some nice integrations with Next.js and Astro, if you're looking for an SSR-based approach.

For 11ty, it's super easy to get started and it generates static files, meaning I don't need a web application server to host my site. I like this for two reasons: static files have less of an attack surface, so the site should be more secure, and in a pinch I can always modify the HTML directly instead of doing a rebuild. This isn't a huge deal, but in the JS world churn is real. It can be very frustrating to come back to an application after a few months and wrestle with updating dependencies, or to find out that a package you depended on has been deprecated or is no longer maintained. Being able to edit the output directly gives you a back door while you figure that out.

I've come to learn that although the final product is not complicated, it is not trivial in terms of knowing how all the parts work together, as the length of the background section shows!

Example

Now to the code... you can view a sample repo here. I've divided it into two npm workspace packages, one that is a bare-bones design system using Lit + Typescript, and the other an 11ty blog site.

To get started, just clone the repo, npm i, and npm start.

As an aside, if you look through the package.json files you'll see I've used an additional build tool called wireit. This is not a requirement at all, but it lets us chain together npm scripts in an OS-agnostic way, which is useful for this demo repo, and I've found it quite helpful in other projects as well.

Let's start with the Lit design system. We've created a simple component, Greeting, which has a single property name, and a slot to allow the consumer to pass in a child element. We've used TypeScript here and set it up so that consumers of the design system can reference either the source .ts files or the components as JS modules. We configure this in our tsconfig.json file, setting the target property to an ECMAScript version that supports modules. The newer the version you specify, the less code needs to be transformed, but it may restrict your consumers' ability to use your packages if their build tools can't handle the newer features. We've chosen ES2019, as that's the version the Lit team publishes their own packages in. Serving both the TS source files and the unmodified JS modules gives us the most flexibility in allowing consumers to use the components how they want, for example creating granular entrypoints for code splitting and extracting shared code into a single bundle. We also output the typings for use in intellisense by setting declaration: true. Sweet!
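The component itself boils down to something like this (a plain-JS sketch of the idea; the repo's version is written in TypeScript with decorators — the markup and styles here match the DSD output shown earlier):

```javascript
import { LitElement, html, css } from 'lit';

export class Greeting extends LitElement {
  static properties = {
    name: { type: String }, // reactive: changing it re-renders
  };

  static styles = css`
    :host { display: block; }
    .blue { color: blue; }
  `;

  constructor() {
    super();
    this.name = 'friend';
  }

  render() {
    // The <slot> projects whatever children the consumer passes
    // in from the light DOM.
    return html`Hey there ${this.name}! Welcome to
      <span class="blue">11ty + Lit</span>. You passed from lightDOM:
      <slot></slot>`;
  }
}

customElements.define('my-greeting', Greeting);
```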

An important thing to be aware of if you start tinkering with the source is that Lit favours using decorators to keep code nice and tidy and abstract away some of the underlying boilerplate. While the decorators specification is pretty well finalized, no browser actually implements it yet, so it's up to our build tools to transform this code into standard JS. To complicate things further, the two most common compilers, the TypeScript compiler (tsc) and Babel, do not work on the same version of the decorator specification. And some developers use BOTH TypeScript for typing support and Babel for additional transforms. They might also use a higher-level build tool like Webpack or Vite, which can use these compilers under the hood. In this repo we are only using the TypeScript compiler, and we've configured it to handle decorators the way Lit requires by setting experimentalDecorators: true and useDefineForClassFields: false. Additional information can be found in the Lit docs. One issue with just using tsc, though, is that it processes files in isolation, so you'll notice it adds the decorator helper code in each component. This is currently being worked on, but in the meantime we will rely on the consumers' build tools to dedupe this code.
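Pulling the compiler settings discussed above together, the relevant slice of tsconfig.json looks roughly like this (a fragment for illustration, not the repo's full config):

```json
{
  "compilerOptions": {
    "target": "ES2019",
    "declaration": true,
    "experimentalDecorators": true,
    "useDefineForClassFields": false
  }
}
```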

We've also re-exported the component(s) in an index file should the consumer want to reference a single module instead. This also makes it slightly cleaner in our 11ty config file, as we'll see below.

Now to the 11ty site.

We need to use a Lit-team-supported plugin, @lit-labs/eleventy-plugin-lit, to get 11ty and Lit to work together. This is required because 11ty works by parsing the source documents and generating HTML from them. When it hits a custom element like <my-greeting></my-greeting>, we want 11ty to be able to detect that it needs special processing. We do this in our .eleventy.js file by requiring the plugin and adding it to the eleventyConfig object. The plugin config property componentModules is an array of modules that contain the component definition and the customElements.define() call, which is how 11ty detects that it has to process those custom tag names with the Lit plugin instead. As mentioned above, we have a root index.js file in our design-system that re-exports all our individual component modules, so we can just reference this one file here.
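The config ends up quite small. Roughly (a sketch based on the plugin's documented options; the componentModules path depends on where your design system publishes its bundle):

```javascript
// .eleventy.js
const litPlugin = require('@lit-labs/eleventy-plugin-lit');

module.exports = function (eleventyConfig) {
  eleventyConfig.addPlugin(litPlugin, {
    // 'worker' renders components in a worker thread during the build
    mode: 'worker',
    // Modules whose customElements.define() calls tell 11ty which
    // custom tags need Lit SSR processing
    componentModules: ['./design-system/dist/index.js'],
  });
};
```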

Since these web components were generated with Lit, we also need to add in anything Lit requires to serialize the custom element to HTML and then hydrate it on the client. One particularly important piece of the puzzle is that under the hood, a dependency of eleventy-plugin-lit called @lit-labs/ssr is responsible for serializing the element into its DSD representation. For that to happen, it requires some DOM APIs available on the window object. However, as mentioned above, this isn't available in Node.js, where 11ty executes. Luckily, @lit-labs/ssr also includes a minimal DOM shim which creates a minimal window object so that it can work as expected.

11ty works similarly to other SSR frameworks in that we can define a common layout that will be used for every page and slot in our page-specific content. The layout is located at src/_includes/default.html, which 11ty picks up by default. However, if you look at the source of this page you'll notice there's quite a bit going on there. It has been copied from the docs for eleventy-plugin-lit, and I've left the comments in as they are quite helpful once you have a grasp of the background information from above. First of all, note that 11ty supports multiple templating engines, and by default files with the .html extension are rendered using the Liquid templating engine, which is outside the scope of this article. Just be aware that in the file there are double curly braces {{ }}, which are what the Liquid engine uses to interpolate computed values. This will be more important in the performance-optimizations branch where we use it to include dynamically generated filenames.

One thing I did change was the import paths for the resources, because the ones from the docs will fail since they try to resolve bare module specifiers (remember from above that in the browser we need to do some additional work to resolve imports like import {map} from 'lodash', where 'lodash' is a bare module specifier).

If we dig into the source we see we're importing three JS files:

  1. lit-element-hydrate-support.js - this file enables components to automatically hydrate themselves once they've been loaded.

  2. template-shadowroot.js - this file is a polyfill for the declarative shadow DOM. Notice it's only loaded in browsers that don't currently support DSD.

  3. index.js - this is our components file, which contains the definitions and the necessary customElements.define() calls to tell the browser how to manage our custom elements. We could also have done an import for each component if we wanted more granularity.

These three files have been generated by Rollup to bundle them into single modules. This not only removes the bare specifiers as mentioned above, but also allows us to save some bandwidth by minifying/uglifying them. We also include <link rel="modulepreload" ... /> so that we instruct the browser to begin downloading the files we know we need as soon as possible: the Lit hydration and design system component modules.
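A Rollup config for this might look something like the following (a sketch under assumed entry paths; the repo's actual config may differ):

```javascript
// rollup.config.js (sketch; input paths are assumptions)
import { nodeResolve } from '@rollup/plugin-node-resolve';
import terser from '@rollup/plugin-terser';

export default {
  // One entry per module referenced from the layout
  input: [
    '@lit-labs/ssr-client/lit-element-hydrate-support.js',
    '@webcomponents/template-shadowroot/template-shadowroot.js',
    'design-system/dist/index.js',
  ],
  output: {
    dir: 'dist',
    format: 'esm',
  },
  plugins: [
    nodeResolve(), // resolves/inlines bare specifiers like 'lit'
    terser(),      // minify to save bandwidth
  ],
};
```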

One particular place we can trim some bytes is by minifying the tagged template literals Lit uses, for example in the css and html functions. Template literals maintain their whitespace, but this isn't required for CSS or HTML markup, so we use an additional Rollup plugin, rollup-plugin-minify-html-literals, to remove it. Note though that this plugin has not yet been updated to handle Rollup 3, so I've left it commented out until that gets resolved.

Rollup outputs the files into a dist directory, and then in our 11ty config file we pass-through these files to the generated site.

Everything's together now. We've enhanced 11ty with a plugin so that during generation it knows how to turn web components built with Lit into their DSD string representation. We've included a polyfill for browsers that don't currently have the ability to work with DSD components. We've enabled Lit components to automatically re-hydrate themselves once loaded. And we've created a small Lit-based design system that we can re-use across multiple projects. Looking good!

You can test the reactive properties by opening the console and typing the following:

document.querySelector('my-greeting').name = '<your name>';

As a reminder, there's a separate branch called performance-optimizations where we also try to introduce code-splitting and cache busting, but I'll admit I've got a bit to learn in that department so I'm open to feedback!

An interesting thing to notice is that the behaviour of the app depends on which browser you're using. Right now Firefox and Safari will have a brief period where the component isn't displayed while Chrome and Edge won't. This is because Chromium supports DSD natively while the other two need to load and execute the template-shadowroot polyfill first.

Note that currently there seems to be a bug in the template-shadowroot script where, depending on timing, an error can be thrown. As a side effect, the dsd-pending attribute isn't removed from the body, so the body remains hidden. Hopefully this gets fixed soon. Until then, just try refreshing a couple of times.
