Introduction
Viktoriia (Vika) Lurie is the product owner for Module Federation and works for Valor Software. Zackary Jackson is the creator of webpack Module Federation and a principal engineer at Lululemon.
This interview is the second part of the "Module Federation v7 featuring Delegate Modules" interview released earlier. Read Part 1.
Magic comments
Vika:
So, let's start our conversation and talk about magic comments and how significant it is that Rspack added them.
Zack Jackson:
Yeah. So magic comments are pretty much a way to decorate an import with hints about what webpack should do with it. You can get around magic comments and do the same things via webpack rules, but sometimes that's a lot trickier; a magic comment lets you handle it case by case. There are a couple of them. For example, we can tell webpack what the chunk should be called.
So with a dynamic import we can define the name the chunk should be generated under, or we can tell it to do recursive imports, like import this whole folder, import anything and chunk it all out, but don't bundle or chunk any .json files. That kind of control over the webpack context can be done through the magic comments. And then there are others like webpackIgnore, which is probably the one I've used the most: I want to tell webpack, skip messing with this import, leave it as a vanilla ESM dynamic import, and the environment itself will handle it. So they do quite a few bespoke webpack-y things. But the challenge we originally had when looking at this for SWC is that SWC didn't have anything to parse comments; they weren't considered a valid part of the AST. ESTree, which is what Babel and webpack (via acorn) are based on, uses comments as metadata markers for additional things to perform on a node.
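For readers who haven't used them, here is a small sketch of the kinds of hints being described, using webpack's documented magic comments; the file paths and chunk names are placeholders:

```js
// Illustrative only: paths and chunk names are placeholders.
async function loadExamples(pageName) {
  // Name the generated chunk instead of accepting a numeric id like 381.js
  const megaNav = await import(/* webpackChunkName: "mega-nav" */ './mega-nav');

  // Folder (context) import: chunk everything under ./pages, but skip .json files
  const page = await import(
    /* webpackChunkName: "page-[request]" */
    /* webpackExclude: /\.json$/ */
    `./pages/${pageName}`
  );

  // Tell webpack to leave this import untouched so the runtime handles it
  // as a vanilla ESM dynamic import
  const widget = await import(/* webpackIgnore: true */ 'https://cdn.example.com/widget.js');

  return { megaNav, page, widget };
}
```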
So anyway, it looks like they landed comment support in SWC, and that will unlock the whole magic comment thing, because originally that was the one limit: well, we could implement it, but we wouldn't be able to read any comments and act on them accordingly. Which isn't a huge deal, but it is really nice to have, since lots of webpack users rely on magic comments to make webpack do more complex things on a chunk-by-chunk or import-by-import basis. Having that means there will be a lot of feature parity with how certain things work, like code that depends on webpack importing the mega nav as mega-nav.js, not just 381.js or whatever name it would otherwise come up with. Preserving those kinds of capabilities in the parser itself is a really big bonus: not having to write everything as regular expressions or rules up front in the build, but being able to do this on the fly. It lets us do some interesting things, like creating a loader that emits imports with magic comments.
So you can get into meta-programming, because now I can say, okay, based on what you're doing, here's a fake file that does these imports, and change how you import it. And we can do that as we're printing out the require statements, without having to go and reconfigure Rspack or anything like that. So it offers this really nice capability of making generative adjustments to how the application is built while the application is in the middle of building. That's not a wide use case, but when you need it, you really need it. And I think another cool one that we do with magic comments is provided exports. Let's say you dynamically import icons, and it's going to make a 5000-file chunk, because you have 5000 icons.
Usually I would go import icons/wash-with-care, and then I just download the one icon. But if I import off the index file, I'm going to get everything, and if I'm doing a dynamic import, that means I'm going to split that file off and pull what I need out of it; the trouble is that webpack only knows what I need out of it after it has been split off. But with a magic comment, and this is similar to our reverse tree-shaking ideas, you can pass a thing called provided exports or used exports, and you can actually tell it: hey, I'm importing icons, but I'm only using three icons, so when you split this thing, tree-shake everything else out of this dynamic import except the few exports I use. That's really powerful for creating code-splittable, tree-shakable code in advanced scenarios where you're lazy loading something that's usually a big library but you only want one thing out of it. So the magic comment for provided exports, or used exports, is super handy.
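What's described here maps to webpack's webpackExports magic comment, which restricts a dynamic import to the listed exports so the rest can be tree-shaken out of the split chunk. A minimal sketch, with hypothetical icon names:

```js
// Lazily load a large icon barrel file, but keep only the icons actually used;
// the other exports are tree-shaken out of the generated chunk.
async function loadUsedIcons() {
  return import(
    /* webpackChunkName: "icons" */
    /* webpackExports: ["WashWithCare", "Cart", "Search"] */
    './icons'
  );
}
```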
Reference architecture updates
Viktoriia Lurie:
That sounds really interesting and cool. But let's talk about delegate modules. The post we did about delegate modules got a lot of comments and a lot of interest. Can you please share how you've updated your reference architecture?
Zack Jackson:
Yeah, so I still have delegate modules filed as a beta capability of the Next.js federation plugin. Mostly I've just left it in there because our Next.js implementation is the most advanced one we've got, so anything new we want to do, we can do on the Next.js one quite easily, since we've created kind of our own monster inside of Next that lets us do anything we want. It's a really nice platform for reverse engineering Next.js at this point.
Delegate modules fit in really easily there, because we already had a big, fairly intelligent plugin on top to make Next play nicely. The idea with delegates is to extract this logic and move it into one of my other universal packages that isn't directly tied to the Next repo; I might put some of it in the Node package, or in the utils package or something like that, and that will give it to everybody. But yeah, the progress so far has been pretty good. We've been able to find a couple of bugs along the way and work out how to implement these things just the right way everywhere. If you see examples starting with the word "delegate" in my example folder, those are delegate module examples.
So we've got a Next delegate, and we have a vanilla webpack delegate, which was the first one we did just to test the theory. And it worked, so it was like, okay, cool, we'll make one more example with the delegate module, using Medusa, in all the vanilla ways. As for what delegates have given us so far in my examples, I think the best one is the Medusa plugin that we use with Federation. As of yesterday, I have a Next.js app deployed to Vercel with several federated Next applications: one of them is a shell, and the rest are components or other pages. Now, with those delegate modules, I can go to any pull request I have open on Vercel, a different branch where my delegate module is implemented, where maybe I forked the branch and made a blue version of the header to test it out. Then let's say I fork it again and make a red version of the header. Now I can go to either the original red or blue versions of the header, and on either of them I can go back into Medusa, change the version, and make the blue one be red, or be nothing, and I can essentially change all the other pull requests as well. So my open pull requests don't really mean anything anymore; each is just a domain I can go and hit. Effectively, if I change something in Medusa, all three pull requests are going to show me the exact same code change, because I'm able to link it and say: use the header from pull request number two; even though you're currently green, use the one from red, pull it back in, and do that on the server or in the browser. Seeing that be manageable was a really, really big thing, because we've mostly only seen Medusa managing things up to the browser. Seeing this now go all the way into the server, where the server responds by asking Medusa what to do, Medusa tells it what to do, and then it comes back to life in the browser without any hydration errors or warnings, is really, really impressive. And on top of that, the delegate also has this concept of an override protocol.
So this is very similar to what we wanted to do with Medusa by adding a Chrome extension, so I can say: this is what Medusa is configured to, but when I go to production, I want to see the blue header just for me, nobody else. We implemented the poor man's version of that. I wrote something that we pick up right at the beginning of the request; I process it, update how webpack resolves the remote, call hot reload, and it pushes me through to the updated site. So now I have Medusa managing everything, and right above Medusa I have: if an override exists in the buffer, read the override, find the current remote entry, and force that override. So if it's something like overrides=home, plus the version and the hash or whatever, then just from a query string my browser will change the blue nav back to red or to green. And if I delete the query string and reload, I get the red one again. I'm able to do that across any of the pull requests. So now I have Medusa, and I have a way to override: before this thing asks Medusa, it asks a system I have on top of it, and if that isn't doing anything with it, we go to Medusa for the main config. That's what's really powerful about delegate modules: we can keep adding layers above or below, and Medusa can just be one of the calls. I was speaking to somebody about security and compliance, and they were saying, well, if Medusa got hacked, couldn't somebody do a lot of damage, like changing your script URLs, since it's your source of truth? And I said, yes, but we've got several security layers baked in here, and we can also set policies inside the delegate module itself. We could say: when you ask Medusa for something, check the domain Medusa gave back. Is the domain registered inside your infrastructure? Is the URL part of your company, with no rogue URL coming from some other location? The delegate module can be a safety check that reads what Medusa is about to give to webpack and validates whether that should be allowed. And if it's not allowed, we can short-circuit again and fall back to the stable release; maybe we have a bucket, a stable channel that we hard-code, so we know whatever stable release we put up is at lululemon.com/remote/stable/remoteEntry.js. So now I have three mechanisms available to me: I can override it on the fly; I can ask Medusa for it, and verify what Medusa returns if I need additional checks; and lastly, I can fall back to a third option if the other scenarios don't match the requirements.
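A conceptual sketch of that layering inside a delegate module. The helper and endpoint names (loadRemoteContainer, the Medusa lookup URL, the override query-string key) are assumptions used for illustration; the point is the resolution order: personal override first, then Medusa with a domain policy check, then a hard-coded stable channel.

```js
// delegate-module.js: bundled into the host; resolves which remote entry to
// use at request time instead of hard-coding a URL. Conceptual sketch only.
import { loadRemoteContainer } from './load-remote-container'; // hypothetical helper

const STABLE_FALLBACK = 'https://example.com/remote/stable/remoteEntry.js';

async function resolveRemoteUrl(remoteName) {
  // 1. Personal override, e.g. ?override=home@https://pr-2.example.com/remoteEntry.js
  if (typeof window !== 'undefined') {
    const override = new URLSearchParams(window.location.search).get('override');
    if (override && override.startsWith(`${remoteName}@`)) {
      return override.slice(remoteName.length + 1);
    }
  }

  // 2. Ask Medusa (or any config service) which version is pinned for this environment
  try {
    const res = await fetch(`https://medusa.example.com/api/remotes/${remoteName}`);
    const { url } = await res.json();
    // Policy check: only accept URLs on domains we own
    if (url && new URL(url).hostname.endsWith('example.com')) return url;
  } catch (err) {
    // fall through to the stable channel
  }

  // 3. Hard-coded stable channel as the last resort
  return STABLE_FALLBACK;
}

// The delegate ultimately hands webpack an initialised remote container.
export default resolveRemoteUrl('home').then(loadRemoteContainer);
```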
But those are three completely different mechanisms for acquiring the connection interface between the two webpack containers, so it just offers a ton of power. The way I see delegate modules is that with them we could probably create our own meta-framework around Module Federation. That's how much power it gives you, because you've effectively got middleware in there. If we want to do something like Next.js, where every page loads data and does that whole thing, we could probably wire a lot of that through the delegate module: if it needs to load data, we can attach that on. What webpack gets back is an interface specific to whatever side effect we want to analyze, understand, or respond to. So if we know the incoming page is a data-fetching page, we can wrap the delegate module to return that kind of construct for fetching data, like getServerSideProps, which is something special in Next. It's really nice to have that level of control. Delegate modules feel a lot like Express middleware inside webpack's require function: in between asking for something and getting it back, you can do whatever you want with it, and then finally you feed it to webpack. It's a ton of control compared to anything we've had before.
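For context, webpack's documented promise-based dynamic remotes use the same underlying hook that delegate modules build on: the remote entry is resolved by code at runtime rather than by a static URL. A trimmed version of that documented pattern, with placeholder URLs:

```js
// webpack.config.js of the host: the remote entry URL is computed at runtime.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        app1: `promise new Promise(resolve => {
          const script = document.createElement('script');
          script.src = 'https://example.com/app1/remoteEntry.js'; // could be computed per request
          script.onload = () => {
            // the remote container is now available on window.app1
            resolve({
              get: (request) => window.app1.get(request),
              init: (arg) => window.app1.init(arg),
            });
          };
          document.head.appendChild(script);
        })`,
      },
    }),
  ],
};
```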
Viktoriia Lurie:
Yeah, this one sounds really powerful.
Zack Jackson:
This is probably the biggest technical unlock since Federation was created. Of all the features it's got, this is probably the most powerful one made available, which is why I'm so excited about it.
Viktoriia Lurie:
Could you also use something like circuit breakers with delegate modules, switching a federated remote based on error percentage or latency?
Zack Jackson:
This is something I've been talking about, and it's also where I think Medusa could be useful. When we talk about a lot of these capabilities, the one area that always gets blocked is: who ingests the information in order to respond to it? I can have a performance monitor, and that's great, and I can make it trigger something in my CI or whatever. But the problem I find is that whenever you do things in CI, it's very dumb; CI doesn't know much. We've made efforts to do things like static analysis for security, or linting, or other tools like that, but CI effectively doesn't understand what's happening. It just does a job, and as long as it doesn't break doing that job, that's about all it knows. Performance monitoring, on the other hand, might know in a little more depth, like here's the area, or here's where it's tagged as slow, but it doesn't actually know what to do with that. If it can only send me a very small piece of information, like "the header is slow", how do you translate that back in a big company with, like, 1000 repos that are created and destroyed all the time? Is that mapping even still current? How do you maintain the link so you know what they're talking about, that "header" is actually this header over here? Delegate modules offer us the option to say: okay, we can retrieve some info to understand what our performance looks like and adjust accordingly. But somebody needs to be the adjuster. So if we use something like Medusa, and we start sending RUM information back to Medusa, Medusa could see: hey, the header was just released, and it slowed down only the sites that are using this newly pinned version of the header.
So now we've reduced the scope. It's not "something slowed the site down", it's "this release just happened, and everybody who took it saw a similar increase in latency". Now we already have a good understanding of what most likely caused it, and of the impact radius. I could start reporting: hey, the navigation has a performance problem, and it's currently impacting these four applications. If it's a critical problem, you could create rules, like a threshold for an alert: if it becomes X percent slower, Medusa sees a big change, pins it back to the previous version, and maybe does that as an A/B test. So set a cookie or something to track and switch this user over to the mitigated mode with a different identifier, and make 10% of traffic get that mitigation response. Is the mitigation improving performance, with no increase in errors? If yes, we could say, okay, push that to all delegate modules, and now we've rolled the site back, but we were able to do it programmatically and almost validate what Medusa thinks is happening. It's a self-fulfilling validation: we think we're getting bad data, so let me tweak this; what did that do? Okay, everything went well, let me roll it out. Oh, when we rolled it out wider we suddenly see a problem, okay, undo that, and it's back to whatever it was. Either way, through delegates we get these capabilities where we can dynamically change how things are done. In the browser, that could be rolling things back or rolling things forward. On the server side, I think it's even more interesting, because if we look at edge workers on Netlify with Module Federation, we could measure what's cheaper. Is it cheaper and faster to send a request to another edge worker to print out the header's HTML, and then have webpack do a federated import of the header, but with a delegate module that changes it to not download code and instead fetch the HTML and return it as a module that exports a string?
So now I'm importing a string that's actually the reply from another edge worker, the result of the work that other edge worker did to make my header. But if that's slow, say it takes 50 milliseconds to connect to the header worker while the header only takes two milliseconds to render, the system could self-optimize and say: well, we've seen it's actually faster if we just pull the runtime down and run it on this one worker, so we'll do that, unless this worker comes under heavy strain, and then in the next invocation we push it back out to another worker. Now we have a kind of elastic computing system that can become a distributed parallel computing system, or fall back into more monolithic, in-memory patterns. Usually you'd have to build a whole big framework around that, and you'd have to deploy your application specifically for the limits of workers and so on. With Federation and Node Federation on Netlify, you can just deploy an app in whatever shape you want, and it will work. I can deploy this thing to Node.js, and then I could say, okay, let me push this up to the Edge, and it would work just fine. I don't actually change how I wrote any code; it just knows it's in the Edge network and that certain things need to be done a little differently. I didn't have to design and develop an Edge worker application; I just built the app and let the build tooling take care of making sure it runs wherever it's supposed to run. That gives a ton of flexibility, even for things like: Edge workers are really good, but they're lightweight, so if you have a really heavy task, sometimes it's better to send that back to the Node Lambda.
So this gives us a kind of three-dimensional scaling, where we can scale horizontally across more workers, or contract down to fewer workers, or move the computing between Node.js and the Edge on the fly. So now your slow Node server can do the cold start and the one complex job it needs to do, and for the other ten things it could do, it can say: well, those have been light in the past, let's send them out to ten separate workers and process them all in one go, instead of sequentially doing one, two, three, four and then sending it back. That's one of the more out-there possibilities, but it's definitely something the design of this delegate system allows; previously it just wasn't possible to make stuff like that work, especially on an Edge layer. For us, it would just be one npm package wrapper, a special delegate module called, say, the elastic compute delegate or whatever, and that thing is designed to know: okay, I can go here, I can go there, I can go wherever. And then how you use the component is similar to the normal Module Federation patterns we'd want, like how server components work: you don't send it a bunch of data, you don't pass it context. You serialize a little bit of data and send it somewhere else, and it does the work. The little data I send is enough for it to understand what it's supposed to do, but it does its own heavy lifting, fetches its own data, and returns everything back, which is the component-level ownership model.
So if you're already following that model to make distributed systems more reliable, there's a high chance you could start splitting work across different compute primitives as needed and actually scale your workload up and down, because it already follows that kind of construct. We're just providing the glue code to let something like this happen, which would be very hard to do manually in a time-friendly way.
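Coming back to the circuit-breaker question, here is a rough sketch of how a delegate might choose between the latest and the previously pinned remote based on health data. The monitoring endpoint, field names, and thresholds are all assumptions used for illustration:

```js
// Conceptual sketch: before handing a remote entry URL to webpack, ask a
// monitoring service whether the latest release is healthy; if not, fall back
// to the previously pinned version (a simple circuit-breaker style check).
async function pickHealthyRemote(remoteName, latestUrl, previousUrl) {
  try {
    const res = await fetch(`https://metrics.example.com/health/${remoteName}`);
    const { errorRate, p95LatencyMs } = await res.json();
    // Thresholds are illustrative only
    if (errorRate > 0.05 || p95LatencyMs > 1500) return previousUrl;
    return latestUrl;
  } catch (err) {
    // If the monitor itself is unreachable, prefer the known-good version
    return previousUrl;
  }
}
```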
Medusa Demo
Viktoriia Lurie:
Thanks for sharing! All right, so now: the demo. Speaking of making distributed systems more reliable, we haven't shared about this for a while. Let's do a quick demo of the new reference architecture and its configuration with delegate modules.
Zack Jackson:
Sure. So, heads up, I can't show my full reference architecture right now, but I have a simpler app that's still working and lets me click around, so I can go through each important page. Nothing super fancy, but it shows all the parts that we want working.
Viktoriia Lurie:
It works perfectly.
Zack Jackson:
So Medusa has undergone several drastic iterations of improvement. A lot of really good work has been done on the UX and the design of it. When this originally started, it was a very simple project. It worked, but it was more like: here's a concept proven achievable, not something you could run a real application off of. Since those early days, with the help of the Valor team, this thing has really exploded into a really nice, first-class product. One of the big things I just saw is that we've got this new UML diagram. My old UML diagram was pretty flaky, but it mostly did the job. This one is much better laid out, and it leaves more room for improvement if we need to keep increasing the amount of data you can see in the UML view. You get better views and better interconnects; it's easier to see who connects to what, and things like that. And in the future we'll see a lot more feature capability come out of the UI we've laid down here, which I think is the big question: how do we build a UI that lets us move forward without redesigning it five times over? What are the most complicated use cases? Cool, those are far away. For now, let's just make things better, and it trends in a way that gives you more power over time.
So under the UML view we have our dependency table, which shows that shop is vending the PDP, shop itself, and a page map; checkout is vending the title, the checkout page, and its page map; and home is vending the navigation, the homepage, and its page map. In here we also see who is vending modules, so we can see everybody who shares; all of these offer "this package is shared". It gives you a nice idea of what's available and what's required in various parts of the application.
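As a rough idea of the configuration behind what this table reflects, here is what one of these remotes might look like in plain webpack terms. The demo actually runs on the Next.js federation plugin, and the file paths and the lodash entry are only illustrative:

```js
// Illustrative ModuleFederationPlugin config for the "home" remote.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'home',
      filename: 'remoteEntry.js',
      exposes: {
        // The exposed modules and their locations on the repo's file system
        './nav': './components/nav.js',
        './home': './pages/index.js',
        './pages-map': './pages-map.js',
      },
      shared: {
        // A shared vendor package like this would show up in the dependency table
        lodash: { requiredVersion: '^4.17.0' },
      },
      // React is treated as hardwired to Next in the demo, so it is not
      // negotiated here; the Next.js federation plugin marks it as external.
    }),
  ],
};
```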
We've also got our node graph here, which has come a long way as well. It's a lot more readable, and I love the sizing, where nodes are scaled so you better understand how big a remote is, or how many connections are made to one remote versus others. If I want to see who uses this title component from home, I can click on it and see: okay, checkout depends on title. And I can see who uses shop: okay, shop is consuming shop as well, and it's also used in home and checkout. We could look at the product page and see that it's used by shop. And looking at shop, we can see it's connected to nav, the page map, the product page; it's connected to several parts of the application. You can also go in here and choose the node you're trying to find, if you're looking for a specific node, so it's a lot easier to navigate as the systems get much larger. We can also look at the depth of the nodes, which is a really nice feature, to see how deep down we go, or how many or few nodes we want to display. Imagine you had 1000 nodes in here; being able to filter them down by depth would be useful, especially as nested remotes and things like that come along. We can also filter by direct connections; I'm not 100% sure if those are all wired up yet. Oh yeah, direction of connections. So if I have that on, I can see which direction it's going. It's a little hard to see, but you can see I have these arrows on here, so now I know who's consuming it and who's providing it, which is a big, useful thing to know: not just that these two are connected somehow, but who it is I need to go and see. Like, if I'm going to change nav, who do I need to go and update? Okay, I need to go and look at checkout, shop, and home, because nav is going to impact those three.
And then we get into our dependency graph, which is like the old chord graph we still have; it's just another way to visualize what's going on, to see all the interconnects overall and how everything spreads across and connects to our other dependencies.
Once we've gone through these applications, if we go back to the UML view, I can flick into home and go to the remote. Now I can see which modules are exposed and where they are on the file system of this repo. If it requires anything additional, like a shared module, that usually gets listed here too. I don't get React or some of the default Next.js things listed, because those are considered hardwired to Next; we just mark them as external so the remotes don't even worry about negotiating React, since in order to live inside of Next, React has to be there, so we don't track those kinds of things in here. But if I were to add Lodash, it would pop up and say: hey, this thing requires Lodash, because that's the one shared vendor or outside package it depends on. We've also got everything that is shared and the versions being shared out, so it's easy to understand who's on what. And we've got the direct dependencies, which is everything listed in your package.json, and we can see who consumes it, so I can see: cool, this thing is consumed by shop, checkout, and title. Then up here, of course, I've got my version manager. I can go in here and choose between a timestamp, the git commit hash, a pull request number, or a calculated semantic version like you would have for an npm package, and those can be listed here as what you're pinning to. The other thing we have is the version comparison, so you can see over time how this container has changed. If we upgrade React, I can see the date that happened. If I change what I'm sharing or consuming, I can see the date a new shared module was added, or when it started importing a new federated module. Even here, you can see I was using 2.8.3 and am now using 2.8 beta 2, and over here this dependency was 6.1 and now it's 6.2. It's very useful to see when a change occurred in your dependency tree. In distributed systems, or even in a single repo, this kind of information is really hard to dig out. If I see a bug start occurring in production, well, what happened? Okay, a release went out; did we only find the bug now, or did it actually start with that release? You'd have to dig through the Git history and try to understand what might have happened. A view like this makes it much more digestible: go in, see that something's wrong, and ask what recently changed in our supply chain. Okay, somebody updated some cookie utility right around this time. And imagine if this view had an API connection to Datadog or to Sentry, so under every release that gets cut you could see the tags and error types that come along with it, or new errors that were never seen before this release showed up. It helps you start correlating information. Tools like Datadog aggregate a huge amount about what's going on, but none of them natively understand how the application was built and how it's supposed to behave. Really only webpack has a deep understanding of that.
So when you take these tools that don't know much and feed them into an ingest engine that understands the webpack side very well, you can start to draw conclusions like: hey, this is likely this thing. It just adds a lot of new options.
That's been very hard to tame or control, even in a single-repo frontend. It's still hard to manage who's using what and where. Even with npm packages: who's still using version one of the carousel, because we want to remove it from the component library? Okay, now I have to do a search across 1000 repos and hope that GitHub search is good enough. But if everybody was reporting to Medusa, I could just go into Medusa and ask: who uses this package anywhere? Cool, here's the exact file and line the import is on. You instantly know a huge amount about your supply chain. There are two other really big things that have come along recently. One is that we've got organizations: if you're a company, you can register an org, give roles and permissions to your users under that org, and start to manage and scope it. Certain users might only have read access, and maybe you want only your AWS keys or something like that to hold the write tokens that can edit or change things. You've now got a policy in there, so it's not just a trust scenario, and you can scope out certain apps: hey, the retail group doesn't need to see the North American web app group.
So we could have a Lululemon organization, but with two separate groups under it that each see everything about what they're doing, with no interconnect, and nothing extra has to be implemented. Another idea is the ability to put policies around the apps, which is something we've thought about possibly doing in the future. Then you would have role-based access permissions. So if I'm in a bank, and accounting is trying to pull in a federated module that usually lives on the public frontend site, you could put security measures in place to say: you're not approved to consume that remote from this host; there's not allowed to be crosstalk here. That provides a layer of governance on top of something that's very hard to govern, because I can just go and drop a script in anywhere or add a cookie, and once it's there, it's very hard to stop. You have to do something like a content security policy, and that still only works at the domain level, so otherwise you have to build infrastructure to block it behind a reverse proxy. Whereas if the glue code is driven by something like Medusa, all those rules can be applied right in the webpack runtime, and it's much harder to circumvent webpack and reconnect something you're not supposed to, because Medusa is driving the whole graph and everybody's using it. So it adds a good layer of security, a good layer of separation, and multi-tenant users. It's a big feature I've always wanted in here for going after enterprise customers, where you often can't just do a single login: they're going to want it behind their SSO, and they're going to want an org-based way to revoke and grant access to users as they come and go.
Zackary Chapple:
I would say, to dive into some of the details of what you're describing there: as a developer creating a federated remote, you can specify that this is just for EMEA, or APAC, or internal, or just for external use. Other people can then find things targeted for their audience, and when they find something targeted for an audience they can't use, they at least understand why, and have a way to reach out to those teams and communicate: okay, this is labeled as internal only, but I need it for my external application, so can we either make it external?
Flexible Environment Management with Medusa
Zack Jackson:
Or can we go through an intake process, basically opening a ServiceNow ticket requesting this federated module from a different director's umbrella? Now there's a governance process in place: you can't just hot-inject something. You still have the flexibility, but your team and governance know what's happening, which is really hard to do with npm packages. There, managing code and permissions for who can do what is a really expensive problem to solve: you have to set up your own custom npm registry, or rules and other things that can be bypassed, and it's still deploy-based. If I want to approve something now, I have to update all the code bases; I can't just go to a central engine and say: yes, this authorized app may access this, in these environments, with this token and only this token, and no other read token is allowed to access it.
So that provides a lot of real maturity and flexibility, given the wide landscape of how different companies and compliance requirements come together. Then I think the last one, which is really great: for a long time, Medusa supported two environments, development and production, and they were kind of hard-coded into its database. That sounded good initially, because really you're either in dev mode or in prod mode, but it gets trickier with staging servers and the like, where maybe I want to control the staging environment, or I have about 15 different environments all hooked up to different backends or versions of, say, GraphQL APIs. Maybe they're testing a feature against a stage or preview or QA environment, and so on. So you might want to say: okay, in Medusa, if you're this environment, here are your pin controls, here's how you're being managed. Then I can just say, cool, bump stage to the latest, or bump QA, or bump some other part of the application stack, not just development or production; now I can have multiple layers. The idea is that all the builds feed into the database, and a build isn't told it's production or development in a hard-coded way. When you send the build to Medusa, you can say, yes, this is intended for production, and if production is pinned to latest, it will grab this incoming one. But I could also say, well, this is a stage PR, and it would just show up tagged as stage; I could still go into, say, production, see that release in there, say, okay, use the one that's on stage, and connect them. It's really flexible to be able to add unlimited environments and change their lock files accordingly. You could create almost a code-freeze environment: it's still production, but you call the new one "code freeze", and as soon as we hit code freeze, that's the environment we're going for, the frozen one, which we know is solid and stable. And we can also set up another environment that's like a failsafe.
So if something goes wrong during code freeze and we need to roll back, we could have a battle-tested QA backup configuration of the application. If we need to do anything in an emergency, we can just go to one place and say: okay, production, you're now going to read the frozen backup environment, and the next invocation will pick all of that up. I can swap those things out on the fly, reallocate what an environment is, or create another copy of an environment, apply changes to it, and point it at a different, more robust config. Which is really nice. Imagine if we had personal environments, so I had Zack's environment in here, and I had an override inside my initial request: if I go to production with the "use Zack's environment" tag, then production does a one-off response with my environment's config applied to the entire app. I don't have to tweak production to see what's going on, or override each remote individually; I can just say, hey, use my personal environment and execute my federation schema against some Lambda somewhere that's managed by Medusa. That's also very nice if you want a personalized setup. Say I'm working with four different teams on implementing the same feature, and we're all in separate repos: we could create a JIRA-ticket environment, and now, locally, in stage, or wherever, you have every contributing party's code pulled together just for this feature, so you can all look at it and work on it easily, all pointing at an environment you can remove later. It gives you a ton of flexibility to do things like that. Or other things, where I could say: hey, use Zack's environment as the connection, and I open a local tunnel on my machine, so your remote is actually my computer's local build, served to you over a tunnel.
Now, if they're connected to my environment, I'm kind of acting as their remote, and I can edit things while they're working on the feature; we can work remotely in tandem, with our changes being pushed and pulled without Git. Every time they press save, I see the change show up when I refresh my page; I don't have to git pull or do anything. That's also a really powerful potential impact: stuff like this could help change how we work and collaborate, especially in distributed systems, or in monolithic systems where there are usually many moving parts that need to come together. It gets very hard to scaffold how those moving parts come together just right for whatever the use case is, without creating a ton of infrastructure and manual work to recreate ten services over here just so we can customize them. All I really want is to link to ten different folders than usual; I don't actually need ten servers spun up to do that. But traditionally, that's how we'd have to do it with ephemeral environments.
Exploring the Potential of Medusa and Module Federation in Reducing Deployment Infrastructure Costs for Multiple Environments
Viktoriia Lurie:
And talking about multiple, unlimited environments, how much do you think Medusa and Module Federation can help save on deployment infrastructure?
Zack Jackson:
So this has been a big one for me. I've personally been on this maybe-I'm-right, maybe-I'm-wrong kind of tangent. If you've ever read anything from Tyson, he worked with me on Aegis and Node Federation, actually, and he had a really good viewpoint when he started working with federated backends. He put it as: when you have something like Medusa and Module Federation together, the concept of CI starts to lose meaning. There isn't really CI anymore; it's just continuous delivery. Most of the whole build-and-deploy infrastructure is kind of eradicated under a system like this, because the whole reason these things get so complicated is that it's all based around uploading a Zip file with everything a machine needs, and if you need something else, you have to give it a new Zip file. That means you need lots of unique Lambdas and so on, because each can only do one job at a time. But if we decouple the file system from the compute primitive, which is what Federation does, then in theory a really large company could have all of their QA, all of their lower environments, be just one Lambda called stage. Every time you hit stage, it becomes a different codebase on the fly, just for you, and responds accordingly. I don't need ephemeral environments or anything, because stage doesn't have a file system it's coupled to. The way I think of it is: imagine if everything on GitHub were a symlinked folder.
Then all I'm doing is saying: okay, for this run, change what this folder links to, and go require the same thing. That's what webpack and Federation give us: the ability to say the file system is anything, and we can change it whenever. And if that's true, we don't need hundreds of Lambdas and ephemeral environments and a big deploy system to manage them, because fundamentally there's just not that much they need to do. I don't need an ephemeral environment, because the only reason I have one is that I need a different Zip file. You could just have two Lambdas, stage and production, and that would probably handle all the development requirements for a team of 500, with just two Lambdas. That simplifies everything a ton in terms of maintenance, and offers companies things like the managed model. Similar to how Vercel does managed hosting, where you just connect the Git repo and don't have to think about much else, Federation offers you a way to make your own managed service. Everybody wants, say, a Next.js SSR frontend, but all they really want is to create a page. They don't actually want the whole Next app, to maintain it, to have a Lambda and all the CI/CD; they just want the page and a little dev environment, and once that leaves their computer, as long as it runs, that's what everybody wants.
These kinds of avenues let you offer that, where it's like: hey, you basically just create-react-app, upload some static assets, and that's the end of anything you do. There are only one or two servers in here that are actually real server Lambdas, and their only job is to do whatever webpack tells them to do per execution. If you have that kind of model, you don't need so much infrastructure; you eradicate it naturally, because there's just no need for the problem that a lot of heavy, expensive infrastructure solves. That's what I like most about it, because I've always been frustrated: why is it so much work just to upload some JavaScript? If we think back before server rendering and before Node.js, back to WordPress and jQuery, it was super simple. You change something in PHP, drag it to the server, refresh the page, and it shows up right away, kind of like hot reloading; as soon as it's up there, you have it. There was no concept of a build or anything like that. It was really easy: you just FTP it, and on the next invocation whatever you've done to the PHP is live. On the frontend side, we had stuff like jQuery, where you could add a jQuery widget to the page. We probably couldn't make sites that scaled forever that way, but you could create a pretty robust experience quite quickly, just because of how easy those pieces were. There were no builds, nothing complex; it's just a couple of lines of JS and there you go. I really liked that model, because it was so simple. It took a couple of minutes to upload a frontend, because it was just a folder inside of a PHP server and some jQuery widgets on a CDN. We lost a lot of that when we moved over to built applications. So where I see all of this is: hey, it brings us back to a simpler time, but lets us keep using more advanced systems, and the operational expense doesn't have to keep bloating as the technology becomes more complicated. And if I only have two or three Lambdas, then instead of focusing on scaling Lambdas and managing load balancers and Route 53 and all the other network stuff that comes with it, I can put most of that effort into something like multi-region deployments.
So instead of deploying everything to one or two availability zones, which gets tricky when you have 40, 50, 60 different code bases that all need to be deployed multi-region, with a lot of pieces to manage and a lot of networking to repeat 60 times over, imagine we only have one or two Lambdas. Deploying them multi-region is just changing the YAML, the GitLab or Terraform file, in one codebase, and now I can deploy this application across 50 availability zones in the US. I can scale it a whole lot faster and a whole lot further than you usually could, because there's no longer a big cost of change management; it's centrally managed. You make the change, everybody gets it, and you don't have to ask anybody to go and do it. They just want to build their page or their feature; that's all they care about, and that's exactly what they get: a stable place to build the page. All the management pain is now in a centralized, more intelligent place. It just makes life easier. I can't imagine working on a non-Federation-powered system after working with one.
Flexible Deployment Strategies with Edge, Node, and Container-Based Systems
Viktoriia Lurie:
Makes sense. And would you still need two Lambdas if you're using Netlify Edge?
Zack Jackson:
Possibly not. I think when it comes down to the Edge, the only thing you've got to think about is what your application uses. If you need to do something like use fs, the Node file system API, that's a Node-only API, and edge workers are just V8: just the JavaScript engine of Chrome, not actually Node itself, only the JavaScript handler. So it doesn't really know what a require is, or things like that. It depends on what you're trying to do. In some cases it might be: hey, I need Node to handle these three or four pieces of the workload, but 70% of the app is just standard React components or something simple; cool, only use Node for what's needed and automatically propagate everything possible out to the Edge. And if you see that the Edge network is getting slow to reply, consolidate it back onto the Node process.
So now Node doesn't have to wait on a network call to the Edge; it's just in memory, and it can instantly do whatever it wants. Being able to flip back and forth as needed, capability by capability, is a really big deal to have. If you have a more agnostic application, say it's not something like Next.js, which has a lot of Node-specific implementations; if we use Remix, for example, which is pretty agnostic about needing Node or running on Deno and so on, then with the Federation capabilities on Netlify you don't really need an actual Node server, unless you need one where it makes sense. My default way of going about it would be similar to how I'm approaching Rspack: I'm going to start with Edge, and if the Edge hits its limit and I need to do this one thing, I can just switch that part over to Node, but I don't have to re-implement my entire system for Node. It can just be: okay, this won't work for me any further over here, I drop it into a different spot, and I'm still good to go. And I can still move things back and forth in the future; it keeps that interoperability there, so you can use the system best suited to whatever need you have. Say we used Edge, we had Lambda for a couple of things, and imagine we also had Docker. Now we have EC2's persistent compute, which is always online, always hot. We have Edge, super close to the user but not extremely resource-powerful. And we have Lambda, which is kind of in between: cheaper, a little slower to start, but good for burst loads.
So now imagine we have something like a GraphQL endpoint, and we want to push GraphQL to the Edge, but we see we're not getting the level of caching or optimization we want with GraphQL at the Edge, because there are too many invocations on different CPUs, so it can't build up an internal cache. Then you can say: okay, let's rather run that back on the containers, where they're always hot and can hold a big in-memory cache of data. Through systems like this, you could just say: okay, we'll send GraphQL over to the Docker container, so Docker becomes GraphQL for me; all my rendering, let's move that over to the Edge; and this one little Lambda handler needs to do a couple of things that are a bit memory-heavy, so we'll put that on Lambda for now, and maybe if we optimize it in the future, we'll send it back out to another edge worker. Imagine doing that with a UI where you could just drag and drop bricks into a bucket: I want this remote to run here and that one to run there, and you don't have to think about the networking and the wiring; it's as simple as dragging the square onto the type of machine you want it to run on. There you go. Or a more advanced version would be: we try to automatically figure out the best place to run this, we learn from every successful execution, and we adjust how things get computed based on how it's working, finding the most optimized path that gives you the most performance. And if something changes in the infrastructure, the system can immediately respond to that change, like an outage on AWS: okay, we'll move the Lambda work to the Edge. It might not be perfect, but we're just going to reallocate all the compute somewhere we know will run while AWS is having failures, which is quite nice. Usually that has to be done through multi-cloud, and it's all infrastructure-based, usually uploading Zip files to several different places. Under this type of model, it's more just: here's a zombie computer, tell it what to do.
So now all you care about is: what's the command I'm going to tell it to take care of at this point in time?
Viktoriia Lurie:
All right, thanks for sharing. This was really super interesting and helpful.