The Wikipedia entry for static web page starts like this:
> A static web page (sometimes called a flat page or a stationary page) is a web page that is delivered to the user's web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application.
> Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.
For example, take a really basic static website: a single HTML page plus a short script that shows a map of the visitor's surroundings. Every file is served exactly as stored, yet every user gets a different map. Oh no! Quick, call the static website police! Such behavior must not be allowed!
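How might such a page work? Here is a minimal sketch, assuming the browser's standard Geolocation API; the helper name and the use of an OpenStreetMap embed URL are illustrative choices of mine, not anything prescribed:

```javascript
// Build an embed URL for a map centred on the given coordinates.
// The OpenStreetMap embed endpoint is used purely as an illustration.
function mapUrlFor(lat, lon, span = 0.01) {
  // bbox is min-longitude, min-latitude, max-longitude, max-latitude
  const bbox = [lon - span, lat - span, lon + span, lat + span].join(",");
  return `https://www.openstreetmap.org/export/embed.html?bbox=${bbox}&marker=${lat},${lon}`;
}

// In the browser, ask for the visitor's position and point an iframe at it.
// Guarded so the same file also loads cleanly outside a browser.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition((pos) => {
    const { latitude, longitude } = pos.coords;
    document.querySelector("iframe#map").src = mapUrlFor(latitude, longitude);
  });
}
```

Every byte the server sends is a read-only file; the variation happens entirely in the browser.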
- User-specific information (stored in your browser from previous visits)
- The location of the user
- The date and time
Context is the combination of some or all of these three factors: who you are, where you are, and when you are requesting a page. With context in play, a website can be anything but static.
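To make this concrete, here is a small sketch that gathers the three factors entirely on the client. The function shape, the storage key and the fallback text are invented for illustration; in a real page the inputs would come from `localStorage` and the Geolocation API:

```javascript
// Assemble the three context factors: who, where and when.
// "Who" comes from data left in the browser by a previous visit,
// "where" from the Geolocation API, "when" is just the clock.
function buildContext(storedUser, coords, now = new Date()) {
  return {
    who: storedUser || "first-time visitor",
    where: coords ? `${coords.latitude},${coords.longitude}` : "unknown",
    when: { hour: now.getHours(), weekday: now.getDay() },
  };
}

// In the browser the pieces come from real sources (key name invented):
if (typeof localStorage !== "undefined") {
  const ctx = buildContext(localStorage.getItem("visitorName"), null);
  console.log(ctx);
}
```

Nothing here touches the server beyond fetching read-only files, yet the page can greet a returning visitor by name at nine on a Monday morning.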
So let's have a better, more useful definition. How about this:

> A static website is one in which requests can only be made for read-only server files.
The point of a static website is not to enforce uniformity but to maintain security, increase speed and minimize server processing load. If scripts can't write to the server, they can't inject malicious code that spends hours mining Bitcoin instead of delivering content when asked. This is a Good Thing.
For most human beings, perception is 90% of reality. We don't question what we already believe, so only 10% of what we hear, see or read gets any real scrutiny. The widespread belief that static websites must be simple and unchanging is totally incorrect, but if it's not challenged we'll all remain unaware of the very real benefits of using them. So here are three false beliefs.
I've already dealt with the assertion in Wikipedia that static websites deliver an experience that's the same for all users. This is only true if we ignore context, as defined above.
Which leads me to a third questionable belief, that client-side processing means slow load times. This one needs a bit of care to unpick as there is a grain of truth in it, but one that's usually so small as to be irrelevant. The problem is that programmers are driven by the need to complete projects quickly, so instead of writing lean code for themselves they reach for standard packages. This may save time but it usually results in far more code than is actually needed to perform the required tasks.
The programmers I meet once a month at CodeUp are mostly either beginners learning Python or experienced people working in big teams. The latter divide between a small group doing regular applications in Java, Python or C++ and a larger group building large websites where Angular and React are the predominant tools.
There's a big difference between coding for a PC and for a browser. In the former case it doesn't matter how big your application gets; all the code is downloaded and installed just once then run locally each time. In a web application, however, bloat should be avoided. Typically, much of your content is finished HTML delivered from a remote server to your browser acting as an over-powered terminal. Everything it needs is supplied each time (though caching reduces the amount of data actually transferred) so the effect of having a lot of bulky code is far more noticeable than for a PC application. It's OK if your server is doing all the page generation but not so good if you're asking the browser to do it.
Things don't have to be this way; it's just convention and there's nothing to stop your content being created by client-side code that will be loaded just once and cached by the browser. In fact, when you're hosted on a static server you can't run code on it so the only option is to do the dynamic stuff in the browser.
One strategy for building a "dynamic" static page is this:
- The JS code runs and immediately requests a pile of resources from the server. Not necessarily everything; just enough to get the initial page up. It monitors the loading processes so it will know when each one has arrived.
- While it's waiting for content to arrive, the JS code builds the DOM for the first screen (if it wasn't included in the HTML). This is quicker than requesting an HTML template and having to wait for it to arrive before you can populate it with data. If you don't need to consider context you can either supply the entire DOM as static HTML or put it into your JS as a string and simply inject it into the page body.
- As the requested resources arrive, they are processed according to the business rules for the website and the results are injected into the DOM.
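The three steps above can be sketched as a small hand-rolled loader. This is only an outline under my own assumptions: `fetchFn` is injected so the same code runs in the browser (pass a wrapper around `fetch`) or under test, and the callback names are invented:

```javascript
// Step 1: fire off all requests at once and keep the pending promises.
// Step 2: build the initial DOM (here a callback) while they load.
// Step 3: as each resource arrives, hand it to a handler that applies
//         the site's rules and injects the result into the DOM.
async function loadPage(urls, fetchFn, buildSkeleton, inject) {
  const pending = urls.map(async (url) => {
    const body = await fetchFn(url); // step 1: request immediately
    inject(url, body);               // step 3: inject on arrival
  });
  buildSkeleton();                   // step 2: runs while requests are in flight
  await Promise.all(pending);        // resolves once everything has arrived
}
```

In a browser you might pass `(url) => fetch(url).then((r) => r.text())` as `fetchFn`; because the requests are started before the skeleton is built, loading and DOM construction overlap rather than queue.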
Unless you have a particularly heavy first page, this will all happen in under half a second, way under the two seconds recommended as the maximum for a page to be well-regarded by its users.
Now I freely admit I am not an Angular or React expert. If either of these can do the above then that's great. But bear in mind that they are not small files even before adding all the dependencies that usually go along with them, whereas a hand-built loader such as the above will be well under 50kb. One of its jobs, after the initial file set has been requested, is to call for other JS files to provide the main functionality of the site. These aren't needed until the page is actually visible so why waste time loading them any earlier? The best strategy is "just in time", where everything arrives just as it's needed.
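A "just in time" loader can itself be very small. Here is one possible sketch: it caches one promise per URL so each file is requested at most once, however many features ask for it. The cache shape is my own choice; the `<script>`-tag injector uses standard DOM APIs, and the file name in the usage comment is invented:

```javascript
// Cache one promise per script URL so repeated calls never re-fetch.
const scriptCache = new Map();

function loadScriptOnce(url, injectFn) {
  if (!scriptCache.has(url)) {
    scriptCache.set(url, injectFn(url));
  }
  return scriptCache.get(url);
}

// Browser injector: append a <script> tag and resolve when it loads.
function domInjector(url) {
  return new Promise((resolve, reject) => {
    const s = document.createElement("script");
    s.src = url;
    s.onload = () => resolve(url);
    s.onerror = reject;
    document.head.appendChild(s);
  });
}

// Usage in the browser, once the first screen is visible:
// loadScriptOnce("main-features.js", domInjector);
```

Because the loader returns the cached promise, any part of the site can `await` a feature file without worrying about whether it has already been requested.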
I hope I have successfully demolished a few myths about static websites by showing that they can be highly dynamic and that moving code to the browser need not result in a slow site. Static sites may not handle the needs of the biggest websites but for many projects they are perfectly suitable, and of course the code you write for a static site will run anywhere without any changes being needed.