
The Future of "View Page Source"

Basti Ortiz (Some Dood) ・ 5 min read

The Reality of the Modern Web

The following terms form a small subset of the extensive vocabulary of modern web development:

  • Compilation
  • Transpilation
  • Minification
  • Compression
  • Code splitting
  • Polyfills and shims
  • JavaScript bundles and chunks
  • CSS preprocessor
  • Frameworks and libraries
  • Build system
  • Search engine optimization

Encountering these terms is inevitable. Nowadays, building serious production-grade applications involves a generous combination of them.

This is the reality of the Web as it currently stands. The code we deploy is no longer recognizable as the code we actually write on our machines during development. Various stages of compilation, transpilation, and minification mangle our "production code" into something network-efficient.
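As a rough sketch of that mangling, consider what a transpiler (down-leveling to ES5) plus a minifier might turn a simple function into. The "output" below is hypothetical and hand-written for illustration; real toolchains differ in detail, but the behavior-preserving unreadability is the point.

```javascript
// What we write during development: modern, readable ES2015+.
const greet = (name = "world") => `Hello, ${name}!`;

// Roughly what might ship to the browser after transpilation and
// minification: the same behavior, in a mangled, unreadable form.
// (Hypothetical output, not from any particular tool.)
var a = function (b) {
  if (b === void 0) b = "world";
  return "Hello, " + b + "!";
};

console.log(greet());     // "Hello, world!"
console.log(a("reader")); // "Hello, reader!"
```

Both versions behave identically; only the development version is meant for human eyes.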

Most of us would reasonably argue: "This is good! The Web is fast, flexible, and portable!" But for others, this is a far cry from the humble roots of the Web when web pages were written by hand using vanilla HTML, CSS, and JavaScript.

My Web Development Journey

Just like many other developers, my passion for programming initially came from a fascination with video games. But four years ago, I had zero programming experience whatsoever. I was too intimidated to unravel the complexity and mathematics behind game development, so I pursued web development instead, hoping that it would be my stepping stone into game development.

My journey began in a classroom. A crowd formed around a classmate of mine because he could magically change the appearance of web pages. Suddenly, he could edit Wikipedia pages on the fly, inserting our names and satirical information into the paragraphs.

I had not known then that he was simply using Chrome's DevTools to manipulate the pages, but it was at this moment when I knew I had to pursue web development.

I asked him how he edited the pages. His response was surprisingly succinct:

"It's easy. Just learn HTML."

Over the following weeks, I spent much of my time studying W3Schools' HTML tutorials. Once I got the hang of opening and closing HTML tags, I moved on to learning CSS.

At this point, I wrote many simple web pages to apply what I'd learned. But the more pages I wrote, the more I longed for interactivity. That was when I finally decided to dive down the rabbit hole that is JavaScript.1

My first few lines of JavaScript were for resuming video playback by clicking a button. By the next year, I moved on from writing web pages to developing Discord bots, which served as my introduction to Node.js and back-end development. Two years later, to make a long story short, here I am now writing this article.

View Page Source

The journey was rough—primarily consisting of long Google sessions and deep Stack Overflow explorations—but I nonetheless pulled through thanks to a variety of online resources and YouTube videos along the way.

Admittedly though, none of this would have been possible if it weren't for that initial spark back when I was first dazzled by the "magic" of manipulating HTML through the DevTools.

Something about seeing the inner skeleton of a web page made it appear so... approachable, as if all the "magic" was lost, but in its place was a beautifully intuitive visualization of page structure.

I remember thinking to myself:

"Wow. I guess web development isn't so bad after all..."

But nowadays, when I open up the page source, all I see is a huge block of minified HTML code, mangled CSS classes, obfuscated JavaScript, and injected third-party trackers.

The Intimidation Factor

This unfortunate reality struck me during a conversation with a friend of mine, where we talked about how intimidating web development had become in recent years given the fierce competition, the endless stream of new frameworks, the complicated development environments, and the cryptic output of "production code".

I recalled that it was exactly this "intimidation factor" that deterred me from game development back then. Suddenly, right under my nose, web development was becoming the same thing: an unapproachable mysterious black box.

In fairness, the Web has indeed become a much more capable, reliable, scalable, and viable platform for modern applications. It's just unfortunate that all the new complexity comes with a hefty price tag.

The Dilemma of the Modern Web

This article was mainly inspired by a thought-provoking talk I watched a while back titled "Keep Betting on JavaScript". Towards the end, Kyle Simpson (the speaker) presented the dilemma of the modern Web between efficiency and simplicity.

"What I'm worried about for the future is that some new aspiring developer—right now he's like 10 or 11 and he's interested in the Web. A couple of years from now when he goes to do a "view source" of the Web, instead of seeing some HTML, CSS, and JavaScript... now he's gonna have some Go, some Rust, some PHP, some JavaScript, and half a dozen of other languages that are all mixed together in a completely nonsensical way... [and] we're gonna say that's the best future for the Web because it's so much more performant, it can do all of these other things—[but] we're gonna forget that people need to be able to onboard to the Web."

It struck me as clear as day that I was once that "new aspiring developer" who looked for a stepping stone into the world of programming. I was once that "new aspiring developer" who was easily dazzled by Chrome's DevTools. I was once that "new aspiring developer" who saw beauty in the Web's simplicity. I was once that "new aspiring developer" who deemed the Web approachable.

Times have significantly changed since then. I can't tell if it is for better or worse, but we can all agree that it is progress nonetheless.

In just a few years, the Web has become a compilation target. For static site generators, HTML is the build target. For UI frameworks, CSS "modules" are built along with JavaScript. For bundlers, compilers, and build systems, JavaScript is the build target. For languages outside the core Web technologies, WebAssembly is the build target.

However, this is not to say that the Web is an "utter mess" right now. After all, we have made a lot of progress in recent years. In fact, some would even argue that the Web has been in its best shape since its very conception.

We just have to be aware that this is the direction we're taking to the future. The decisions we make going forward will pave the path for "new aspiring developers".

Honestly, I don't know if we're building the best path forward for "new aspiring developers" (such as ourselves back then). The dilemma of the modern Web is founded on a trade-off between efficiency and simplicity. In software development, a similar eternal debate exists between application performance and code readability.

With that said, what are your thoughts on the future of "view page source"? What do you think about the Web becoming a compilation target? Are we going in the right direction?


  1. As you can probably tell, I still haven't found my way out of the rabbit hole. 😂 


Discussion

 

I think it's a shame, but the specific area I think it's a shame is semantics.
A lot of sites have terrible HTML.

Here's an example, not of the actual source (which is minified), but of what the DOM ends up looking like. It's from Reddit, but that's a fairly random choice on my part:

[Image: an example of DIV soup from Reddit's DOM]

Nothing is semantic. You can't reliably interpret this with assistive technologies. You can't do anything much with it apart from throw it at a browser and hope it works. And that's all a lot of developers care about, which is the problem - too many people don't care about the HTML as long as the page looks pretty. In fact, they want it to be difficult to interpret because obfuscation thwarts many ad-blockers. It's security through obscurity, to be sure, but it's there as much to hide potential malware as it is to improve performance.

 

Man, you're right. I haven't even considered the accessibility side of the argument.

I came to write this article from the viewpoint of onboarding people to the Web. Now I see that it extends beyond just that.

 

Thing is, it's a straightforward swap to make components use semantic elements, but because the result isn't visibly different to the majority of end-users, people don't bother.
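The swap is easy to illustrate with a before-and-after. The markup below is invented for illustration, but it mirrors the pattern in the Reddit screenshot above:

```html
<!-- "div soup": conveys nothing to assistive technology -->
<div class="nav">
  <div class="item"><div class="link">Home</div></div>
</div>

<!-- the same UI with semantic elements: the structure describes itself -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
  </ul>
</nav>
```

Screen readers, search engines, and reader modes can all do something useful with the second version; the first is opaque to everything except a CSS file.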

It's unfortunate how things like accessibility can be swept under the rug for reasons like "priority" and "ticket triage". After all, accessibility features really only affect those who need it, which comprises only a minority of most user bases. It's no surprise that big feature releases are prioritized over "invisible" changes towards better accessibility.

 

I've been a web developer for 15 years. The modern web is kind of a hack.

The web was originally designed to share and link static documents. That's it. And while we've added many things to it, the foundation remains the same. The fact it's grown this much is a testament to the flexibility in its basic design. I'd argue we're pushing its boundaries.

Think about it this way. We're making complex, stateful, dynamic applications on top of text documents. We're programmatically generating these text documents and/or altering their representation in a client application. It breaks very easily and is challenging to optimize.

Before web development I spent 10 years writing traditional client/server applications. Believe me, it had its headaches and drawbacks, but the systems I worked with were built for that purpose, were very efficient, and were relatively simple to debug.

The web wins because it has the best possible distribution model. Everyone already has the necessary client installed. Hopefully we can continue to adapt it more effectively with new standards.

 

We're making complex, stateful, dynamic applications on top of text documents.

This pretty much sums everything up. In all my time here on DEV, this is probably the most insightful comment I have ever read.

 

As a platform, Glitch.com is wonderful as it lets you see the full source, including for the server, and edit it like a Google doc. However, that is only for glitch apps, and like you said, the rest of the web is overwhelmingly minified and bundled. I can only hope ES modules will help change that.

 

ES modules can help with that, but as long as it is more network-efficient to bundle up code, the minified Web is unfortunately not going away any time soon.

From a network standpoint, the main issue with ES modules is the "linked list" of dependencies. The browser will only fetch a script once it encounters the import statement that names it. This is a problem for deeply nested dependency graphs, hence the popularity of bundlers.
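To make that "linked list" cost concrete, here is a small sketch (with a hypothetical module graph) that counts the sequential round trips a browser needs: each level of imports can only be discovered after the previous level has been fetched and parsed.

```javascript
// A module graph modeled as { module: [its imports] }.
// (Hypothetical app structure, purely for illustration.)
const graph = {
  "main.js":  ["app.js", "utils.js"],
  "app.js":   ["utils.js"],
  "utils.js": [],
};

// Sequential round trips = the longest import chain. Modules at the
// same depth can be fetched in parallel, but each deeper level is
// only discovered after its importer arrives and is parsed.
function roundTrips(mod) {
  const deps = graph[mod] || [];
  if (deps.length === 0) return 1; // just fetch the module itself
  return 1 + Math.max(...deps.map(roundTrips));
}

console.log(roundTrips("main.js")); // 3: main.js -> app.js -> utils.js
// A bundle flattens the whole graph into one file: 1 round trip.
```

The deeper the graph, the longer the waterfall, which is exactly the problem bundlers exist to flatten.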

 

Actually I don't think that's how it works. I'm pretty sure it loads and parses the whole thing before it executes. Don't quote me on this

Well, yes. It does. But once it parses an import statement, then it has to fetch the next level of dependencies... and then the next... and then the next... and so on and so forth just like a linear traversal of a "linked list".

Sure, the browser can fetch and parse in parallel, but at the end of the day, code bundles work around this issue by including all imports in a single file. This removes the need to fetch a "linked list" of nested dependencies (import statements).

However, for common dependencies like lodash, if you use a CDN like Pika or jspm, you won't have to load them; they'll be cached. With bundles, that is simply impossible. Also, as a dev, I vastly prefer not needing a build step, so I use ES modules.

Yes, it would be great if everyone used smart CDNs like Pika, but the reality is otherwise. I do hope for the best, though! 🤞

But I believe you misunderstood me about the network disadvantages of ES modules. ES modules can be represented as a big graph of dependencies.

NOTE: By "dependencies", I mean external libraries, internal application code, userland modules, and other related components. It is not limited to NPM modules alone.

However, the problem with this dependency graph is traversal. When the browser fetches the modules named in import statements, it is basically traversing only one level of that big dependency graph. This must be repeated until the whole graph has been traversed, where each node (dependency/script/module) requires a network round trip (assuming the absence of HTTP/2's server push feature).

Even with HTTP caching enabled, this is still the main problem with ES modules. Yes, the network trip has been mitigated, but it would have been more efficient (in terms of CPU cycles) if the browser had just parsed a single bundle instead of recursively traversing an entire dependency graph all over again.

For small sites (with equally small dependency graphs), this would not matter at all. But I would imagine that heavy web apps such as those of Facebook and Spotify would slow down to a crawl if they used ES modules over bundles instead, even with caching enabled.

Caching only goes as far as mitigating network round trips. For large dependency graphs, traversing while parsing syntax can prove to be quite taxing on the CPU. Even more so for mobile devices.

This is why code bundling has become a necessary build step for large applications. Again, smart CDNs can only go as far as mitigating network round trips. It is still more efficient (and battery-friendly) to load a big bundle rather than a deep dependency graph.

Or, use preload links to preload all your JS, and use the await import function as needed, so the browser preloads and caches all your JS, but loads only the minimum first.
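That approach might look something like the sketch below. The module names are hypothetical; the idea is to declare `modulepreload` links so the browser fetches and caches the whole graph early, while the page itself only executes what it needs via dynamic `import()`.

```javascript
// Sketch of the preload-then-lazy-import approach.
// (Module paths are made up for illustration.)
const modules = ["/js/app.js", "/js/charts.js", "/js/settings.js"];

// Emit <link rel="modulepreload"> tags for the document <head> so the
// browser can fetch every module up front, in parallel.
function preloadLinks(urls) {
  return urls
    .map((url) => `<link rel="modulepreload" href="${url}">`)
    .join("\n");
}

console.log(preloadLinks(modules));

// Later, in the page, a feature is loaded only when it is needed:
//   const { renderCharts } = await import("/js/charts.js");
//   renderCharts();
```

This avoids the discovery waterfall (everything is requested up front) while keeping the no-build-step workflow.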

I suppose that could work. I can see the appeal behind your method. It's definitely a much better developer experience without the build steps.

Personally, I still wouldn't rely on this behavior if I were to write a large application. Browser support for dynamic imports aside, there just seems to be more runtime overhead with ES modules than if I had just moved the import overhead at compile-time as a build step.

But then again, this is an unfortunate side effect of the "modern Web", where bundling code is just more network- and CPU-efficient than the more elegant ES modules. 😕

Don't get me wrong, I'd love to see a future where the Web is beautifully intuitive and semantic everywhere, but the reality of the situation just deems it otherwise.

ES modules are great, but the current climate of the modern Web forces me to add a tedious build step because it's a "best practice" for network and parser performance.

So yeah... As much as I want to keep my codebases simple like you do, large applications call for such complexities. ES modules are not exactly the most "sCaLaBLe" solution. I'd love to see the day when I'd be proven wrong, though.

 

The browser application has become a compilation target. You are correct. I've worked with JS when jQuery was simply the best thing since sliced bread. I don't write JavaScript for work anymore, but do in personal projects and the ecosystem is dizzying. While I have my complaints, it's progress. I've opened "view source" maybe five times in the last five years because it is no longer approachable. I don't necessarily think this is a bad thing, though. Nobody really intends for source code to be read by the consumer of the end-product/service/client.

But let's step back for a moment -- despite this, JS is still many people's introduction to programming from a non-traditional CS background today, providing one of the tightest feedback loops there is -- write some code, refresh the browser, see something. There are more people learning JavaScript, HTML, CSS across the world than perhaps any other programming language. People don't learn JS starting with Node -- it's almost always from the browser first.

"View source" today is not meant for people to pop open and poke around anymore. Do this on Facebook and a scary warning pops up for those that aren't web devs. There is an argument to be made that perhaps view source shouldn't even be available -- because it's not like other UI clients like iOS or Android apps allow its users to do that.

From where I'm standing, JavaScript and browser-based development have more than a healthy dose of interest and people coming in, despite the friction and the dizzying array of constructs one faces when actually getting up to speed with modern JS development.

Because of the influx of new developers, their voices almost drown out other ideas. SvelteJS is absolutely one of the simplest and fastest UI libraries out there, but people automatically side-step it because the React community is just outsized and smacks down every other opinion with disdain. Because people don't poke at new things with a healthy intrigue anymore, only a few voices are heard. And everything that you've described simply falls by the wayside with newcomers, because it is just "accepted". They know no other way and probably don't even have anything else to compare it to if JS is the first ecosystem they are introduced to.

It isn't until this same developer jumps into other languages and platforms and asks, "What's the build tool? What's the transpiler? What's the Webpack here? What's the package.json?" They are usually in for a surprise if stepping into something like Ruby, Elixir, Go, or Rust (if ever). The toolchain can be learned in a day, everything is typically bundled into a single utility (save for maybe Go), and things can be blazing fast with just as tight a feedback loop, although not as animated as a browser can be.

But all-in-all, I think Web Assembly is paving a future where what is in the browser can be simple and approachable again -- treating it literally as a view layer. With Rust and WebAssembly, we can have our cake and eat it, too. It can write better JS and semantic HTML than we can: youtube.com/watch?v=ohuTy8MmbLc

 

So discounting the "dizzying ecosystem" of JavaScript tools and build systems, would you say that WebAssembly is the right direction for the Web, where the heavy lifting is done by "native" code, while the UI and view layer is managed by simple HTML and JS?

Once the browser support starts coming in, I'd say that's a sustainable future for the Web. Though, I can't help but feel strange about how far it is from its humble origins.

 

I think it’s one direction with some steam, but I hesitate to say that it is the right direction

Well, you see, this idea of “humble origins” that you speak of, while also true in my case, is really a figment of our own imagination. Was this really the intent of its designers, or even a goal? I’m not sure. I never looked into that history...

That said, though, I agree — once we’ve got all the browser support, it’s a sustainable future. I think we can finally get to semantic HTML without shoehorning application state into a document and we can again pop open the source and make sense of it again.

 

This is, at the very least, a really good conversation to bring attention to.

 

I think web browsers are moving along the same paths (and toward similar complexities) as mobile dev (iOS, Android), so compilation is inevitable. "View Page Source" / "Inspect Element" is not the goal.

One of the similarities is relying on servers to do the heavy lifting, while servers in turn try to hand as much work as possible to the clients (as client CPUs become cheaper and more powerful).

 

I am beginning to think that the future of the Web ultimately aims to democratize application distribution, where we would no longer need the approval of monopolies and app stores to launch apps.

The future of the Web is not to host new applications, but to move native apps towards a more "open" platform, hence the similarities you pointed out with mobile app development.

Either way, that is a very interesting point you brought up. Although we can never be sure if this is the right direction, this is the direction we're taking nonetheless.

 

You are right. All platform compatible, with the same standards. Unlike mobile.

Also, not App Store dependent, although currently DNS-dependent...

I guess this just has the consequence that the future of "view page source" is no more.

If the Web will ultimately serve to replace the native mobile platform, then the compiled distribution files should serve no purpose to the end-user, either.

There would be no point in maintaining a "beautiful" page source if it would not matter to the end-user. Everything would be for the sake of network efficiency.

If this is the direction of the Web, I would say "view page source" is as good as dead. But at least it promotes platform-independence, right?

 

The source code of WebAssembly will not be available, or it will be obfuscated. I still like plain JavaScript and source maps.

 

Why would you want the clients to access the source maps?

Open sourcing (OSI) is just a movement, with its criticisms, but it is indeed probably the direction the world is currently heading.

 

Better error messages with debug information helps in resolving issues quicker.

I wonder if it can go that way with Android, iOS or even desktop apps as well.

What about Stack Trace?

WASM / compilation shouldn't exclude the possibility.

Yes, it is a big pain; a stack trace without line numbers is useless. That's probably the reason people are more interested in React Native and JavaScript-based development. We were using Xamarin for app development, and we created Web Atoms JavaScript for mobile app development on Xamarin.

It's an immature technology - sadly compromises had to be made in order to get support out the door.

Source maps are already supported.

 

That's true. I just wanted to point out that the Web has been going in the direction of just "being a compilation target"—WebAssembly being a prime example.
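As a tiny illustration of that, the bytes below are a complete WebAssembly module exporting an `add(a, b)` function. They are hand-assembled here for the sake of the example; in practice such bytes are a compiler's output (from Rust, C, Go, and so on), not something anyone writes or reads by hand, which is exactly the "compilation target" point.

```javascript
// A minimal WebAssembly module exporting add(a, b), as raw bytes.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Compile and instantiate synchronously (fine for a module this small).
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

"View source" on those bytes gives you a hex dump; the WAST disassembly is more readable, but it is still a far cry from the hand-written HTML of old.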

 

Well you will be able to view the WAST, but it won't look like source code unless you are doing very simple things. It's more readable than assembly, but even minified js is more clear.

 

We're going in a horrible direction. Browsers and the web aren't made for the kinds of applications that are beaten into them. But because a bunch of people are hellbent and stubborn on doing it anyway, well... here we are.

 

To be fair to the Web, the main issues were brought about by the fact that the Web just happened to become the most accessible platform in the world. To reach the largest audience, it comes as no surprise that people are so "hellbent" and "stubborn" about pushing the Web to its boundaries. Sadly, perhaps it's been pushed a bit too far...

 

Don’t think that we are building web pages anymore, at least not the majority of the time. We are building complex applications that execute business logic, deal with asynchronous interactions and are data driven in most cases.
Consider these apps similar to what you can find on the Apple Store or Google Play, but they run in the browser. Although, I do believe that we as web developers can do a better job when it comes to correct usage of HTML5

Not all elements should be a <div>...