This is my experience and my opinion. In this piece I will discuss how I got from A-B and the experiences that led me there. There's nothing wrong with the hot loading tools that are out there and many people have done great work on them, respect. They're exceptional tools for certain workflows.
If you love hot loaders and they meet all your needs, fantastic! You can save yourself 7 minutes of reading. You are also, however, free to do whatever you want and exercise your agency as you see fit.
- For the purposes of this entry I'm using hotload interchangeably with auto-refresh.
- This topic will reference Angular's ng serve directly.
- I'm not going for a pulitzer here, just documenting my thoughts
- Full refund of your ¥0 entry fee upon request
Hotloading is a part of many local development and testing strategies for FE web dev presently. The gist is this:
- an incremental build runs every time a file in the affected area changes
- some running code/trigger/watcher/etc. notifies a local server of the new build output
- the server delivers the updated payload to a listening websocket
- the page may or may not autoreload, given the underlying plumbing, configuration, operating system, frequent flyer status, phase of the moon
This description glosses over some icebergs, but is accurate enough for our purposes as this isn't a treatise on the subject. The angular flavor of this is the angular-cli feature ng serve. If you want to learn more, your local lib has an .md file on this you can browse at your leisure.
Rather than yet another recitation of the Angular docs (YARD), I'm just going to discuss the high points of what led me to basically give up on the topic above:
More often than not, when hotloading code of any significant complexity, one of two things happens: either the present state of the application is not amenable to patching in a hot context, or the application configuration/bootstrap/dependencies end up in an invalid state that is impossible in production, or on any system not running under a hotload configuration. That holds, in the abstract, for many hotloading scenarios.
The second scenario that routinely burns me is when I'm inspecting the final output of my code in the UI or debugging: modify some small amount of code and BOOM, page refresh (or some surreptitious portion of my code changed behind the scenes, which isn't really any better). It's at best jarring, and very likely maddening as you not only lose the debug context, but possibly also a carefully constructed edge case you were testing that may or may not be easy to get back to.
Certainly, this is in no small part due to the fact that I adore the autosave features in tools like Webstorm/IntelliJ. I do, however, value that feature over prognosticative reloading.
"You could have easily solved this problem by compiling from source and writing your own drivers" - The Honorable John Q. Neckbeard, Esq., CCNA
In Angular, the configurations out of the box are generally good. There's also cli configuration overrides that you can use to configure the behavior ad hoc:
```
ng serve                      // serve vanilla dev env build
ng serve --aot --prod         // fully production compatible
ng serve --live-reload=false  // as on the tin
// ...and most other basic args
```
There's also angular.json, which you can edit to further modify both defaults and some degree of additional behavior. And then there are lower-level changes you can make in the dependent configuration if you want to live dangerously.
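For instance, turning live reload off by default (rather than per invocation) is, if memory serves, a one-line change in the serve target's options; the project name here is made up:

```json
{
  "projects": {
    "tour-of-zeroes": {
      "architect": {
        "serve": {
          "builder": "@angular-devkit/build-angular:dev-server",
          "options": {
            "liveReload": false
          }
        }
      }
    }
  }
}
```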
Understand, of course, that all of that amounts to a fair amount of effort and mental overhead, of the variety that will need to be replicated across your team to be useful.
All of this boils down to time. Time spent second guessing your tools, time spent mucking with configuration, time spent doing things other than producing product. Your experience will likely be based on what you do; designers and testers will be more likely to find the feature useful than people writing plumbing and structural code.
The problem for me isn't that the functionality doesn't work. It objectively does things I don't need it to do. It does too much, and too much is the default. Anything less is work. That is the problem:
The tool, it does too much.
When I look at my workflow and see what I actually need, I'm left with the following:
- Incremental builds
- Watchers rolling up those incremental builds automatically
- Re-serving of the payload
- Read my damn mind to figure out when I need things reloaded
The reality is what I need is software that will serve my resources on demand. And nothing else. Now as to the how:
- Done - already supported by "ng build --watch"
- Second verse, same as first
- We'll need some software in the middle to make this happen
- A modern miracle known as the refresh button/cmd|ctrl+r/ctrl-f5
The power and the glory
WARNING: The following is only useful if:
- You're serving your application as static resources, like a gender non-specific person of authority in a reporting hierarchy.
- You want to test local things locally
- You want to test remote things locally
- You have time and you are bored
In the realm of interthings, there exist two classes of tools that can be used for this with relative ease:
- A server (express, IIS, tomcat, jetty, node, kestrel, python, two cans and a string, etc)
- A proxy (nginx, charles, fiddler, etc)
For the purposes of this example I'm going to use a proxy, specifically Charles, because I'm on a Mac this morning and setting up nginx can be a PITA. Please note that I am not being paid by the guys/gals/elves that make Charles, but am not above accepting kickbacks if this generates sales. Also note that you can do this for free with nginx as configuration, and easily and freely on Windows as well. #MacTax #FinishFiddlerForMacTelerik
Charles the Proxy
To accomplish this we're going to use a feature called map local. Map local has docs, which you can read. The short of it is you provide a request configuration and tell it what you want served when that wild request appears, EZPZ.
Start by building your application using ng build --watch (gulp/npm/whatever you do as long as it produces an output folder you can serve):
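The copy-paste version, with the output path assumed (adjust to your own project name):

```
ng build --watch        // incremental rebuilds on save; nothing served, nothing reloads
                        // output lands in dist/ (dist/tour-of-zeroes in this example)
```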
Next we'll add configuration that points at that location and targets our local env:
The configuration above creates what amounts to a DNS bind on serval.local. It could just as easily be google, or any other domain. We'll talk about why that is useful in a second.
The first configuration value uses /* to map all path entries to the dist folder for our tour of zeroes application. The second ensures that a call to the domain directly will return the index.html file. This could just as easily be used for namespacing: for example, if we wanted to locate our local instance at serval.local/toz/, it's as simple as adding that to the path.
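For those avoiding the #MacTax, here's roughly what those two rules look like as an nginx server block. The paths are hypothetical stand-ins, and I've added the conventional SPA fallback so deep links resolve to index.html too:

```nginx
server {
    listen 80;
    server_name serval.local;            # our fake local domain

    # rule 1: map all path entries to the dist folder
    root /path/to/tour-of-zeroes/dist;
    index index.html;

    # rule 2: a bare call to the domain (or any unknown SPA route)
    # falls back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```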
What did we accomplish?
At this point we have a local "server" running which is aware of our application's dist folder and can intercept calls to known domains, replying with local assets. We have full control over refreshes, so our debug context is protected.
So, that's cool, now we have a simple, persistent way to serve our content that isn't part of the build pipeline and is pretty easy to set up. But what else can we do with this?
Those of you who were paying attention might have noticed the following text:
and can intercept calls to known domains, replying with local assets
Yes indeed. Not only fictitious domains and local domains, but also REMOTE domains. Tell me, my dear coder, do you often find yourself deploying console.debug output or log instrumentation to non-local environments just to figure out what is going on? Well, with this method, you have a genuine bona-fide electrified six-car method for safely running debug code in any environment, including production. Provided, of course, that code isn't dynamically generated on a server in a way that can't be easily emulated. But that's why we're doing SPAs and PWAs in the first place, isn't it? This is where that whole separation of concerns rubber hits the real-world pavement.
Let's set up an example: my fictitious org has the following domain composition for simple CI/CD:
```
beta.servalcorp.io  -> dev
gamma.servalcorp.io -> test
servalcorp.io       -> prod
```
Let's set up some config rules to represent our environmental overrides, which we will leave disabled for the time being:
First let's look at the prod environment. Accessing servalcorp.io in chrome dev tools, we see the expected production payload:
So, let's rebuild using ng build (development build), enable the servalcorp.io production rules in map local, and reload:
Now, as if by magic, we have the debuggable source we want to see:
What about API calls? We can apply the same pattern here too (configuration varies by platform). This is useful not only for mocking out data, but also for dealing with services that are faulting, staging E2E tests (nginx recommended here), and working around network issues in general. It's as simple as staging your payload, configuring your mapping entries, and calling it a day.
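As a sketch of the API case in nginx terms, with the route and mock file names invented for illustration: one canned payload for a single endpoint, everything else passed through to the real backend.

```nginx
# serve a staged payload for one route only
location = /api/servals {
    default_type application/json;
    alias /path/to/mocks/servals.json;   # your hand-built mock response
}

# every other API call still hits the real environment
location /api/ {
    proxy_pass https://servalcorp.io;
}
```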
Use your imagination! And, failing that, borrow someone else's.
This is mostly representative of the mental journey I took to get where I am. Around and around, beating my head against a wall, realizing I could go around the wall, and actually ending up someplace better by merit of it. I think hotloading frameworks have their place, just not in my development process.
The tl;dr summary would look something like this
| Pros of using a Proxy | Cons of using a Proxy |
| --- | --- |
| No hotloading | No hotloading |
| Simpler, segregated experience | Harder to package for distribution with source |
| More granular control of what is served | No single common cross-platform tool |
| No defaults, choose your own adventure | No defaults, choose your own adventure |
| Ability to easily use debug versions of code against upper environments | |
| Impromptu load testing | |
| Robust request analysis | |
| Consistent pattern on all environments | |