
I think competition is coming. Elm and languages that could target WebAssembly should bring some variety to front end web development.


Kotlin/JS is also a really interesting concept and I'd love to see it gain more momentum. Unfortunately, I suspect Kotlin is just too strongly tied to the JVM for it to see much use beyond multiplatform projects (sharing the same data models between front end and back end).


Thoughts on this?

JetBrains / kotlin-native

Kotlin/Native infrastructure

official project


Kotlin/Native is an LLVM backend for the Kotlin compiler, runtime implementation, and native code generation facility using the LLVM toolchain.

Kotlin/Native is primarily designed to allow compilation for platforms where virtual machines are not desirable or possible (such as iOS or embedded targets) or where a developer is willing to produce a reasonably-sized self-contained program without the need to ship an additional execution runtime.


  • install JDK for your platform, instead of JRE. The build requires tools.jar, which is not included in JRE;
  • on macOS install Xcode 9.3
  • on Fedora 26+ yum install ncurses-compat-libs may be needed

To compile from sources, use the following steps:

First, download dependencies:

./gradlew dependencies:update

Then build the compiler and libraries:

./gradlew bundle

The build can take about an hour on a MacBook Pro. To run a shorter build with only the host compiler and libraries, run:

./gradlew dist distPlatformLibs

To include Kotlin compiler…

I'm expecting this to catch on more than Kotlin/JS because of Kotlin's relationship with Android. It's more likely that people will want to share Kotlin code between Android and iOS than between back end and front end. In my (quite limited) experience, Java-based backends seem to be used more for microservices and APIs, as opposed to full stacks like with Rails or PHP backends, leaving less incentive to introduce Kotlin into a JS codebase.

That being said, I think the popularity of TypeScript and Flow shows that web developers do want static typing; I'm just not sure how likely it is that Kotlin becomes a popular option.

I feel like any attempt to move Kotlin away from the JVM is a step towards compiling to all sorts of targets, because the big hiccup when moving off the JVM is that you can no longer dip into the Java ecosystem to fill the gaps. It forces certain problems to be solved, which could then make the next thing easier.

"That being said, I think the popularity of TypeScript and Flow shows that web developers do want static typing; I'm just not sure how likely it is that Kotlin becomes a popular option."

Well... some do. And a lot of web developers really like JS and don't feel that static typing adds any real value. I think both sides of the argument make sense. I definitely understand people's reasons for loving TypeScript. I work on a huge app built with Angular 2+, and I honestly hate TypeScript. I have yet to see static typing solve the problems that arise from bad coding practices slipping through the cracks when you have 200+ devs working on the same app (which is the promise of TypeScript). In my real-world working experience, TypeScript is just an extra-bloated superset of JavaScript that doesn't actually fix any problems. I think the real strength of TypeScript is that it makes people who are very familiar with static languages happier with JS...
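For context on what the two sides are arguing about, here is a minimal TypeScript sketch (the `User`/`greet` names are made up for illustration) of the kind of mistake a type checker flags at compile time rather than at runtime:

```typescript
interface User {
  id: number;
  name: string;
}

// In plain JS, passing a string id or misspelling a property only fails at
// runtime (or silently renders "undefined"); here the compiler rejects it.
function greet(user: User): string {
  return `Hello, ${user.name} (#${user.id})`;
}

// greet({ id: "42", name: "Ada" });
// error TS2322: Type 'string' is not assignable to type 'number'.
greet({ id: 42, name: "Ada" }); // "Hello, Ada (#42)"
```

Whether catching this class of bug is worth the ceremony across a 200+ dev codebase is exactly the disagreement above.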


I don't like WASM as the end-all next step in Web performance, because it ruins a big part of the openness of the Web. If WASM really takes off, then gone are the days of being able to view-source on a Web page and see how it works. Even a text-based bytecode would be better than the current format. Though my real ideal situation is just for JavaScript to become statically typed.

I really did like WebAssembly in concept. Something I've been reading about recently, which I will eventually do a full post on, is a project by Oracle called GraalVM. At its core is basically a framework for creating programming languages in such a way that they are interoperable with all other languages made with the framework, as well as being immediately runnable on the JVM. Making a JVM is really hard, so I really loved this idea. (The reference Ruby implementation they made is ~400x faster than standard Ruby, iirc.)

This is what I imagined WebAssembly to be when I first heard about it. A base open format that multiple languages could target to gain the speed benefits that JS engines have already built. Kinda like .NET


It's a good point, but I'd assert that we lost that a while ago. Minification, obfuscation, optimization, etc. mean JS is often as inaccessible as WASM. On the bright side, the WASM tooling is looking pretty good and could help devs make sense of it. Of course, if you're at that point, you're probably past "view source" to make sense of things. I suspect that the way this learning will manifest, going forward, is via OSS (meaning access to the uncompiled source). But it is definitely a good point, and it's the same sentiment that causes me ambivalence around HTTP/2's TLS everywhere (I know there are ways to get around that, too, but they're cumbersome and require you to really know what's going on).

Anyway, I'm a fan of it, b/c I think the browser should be treated like a platform (due to watching a lot of Alan Kay, esp The Computer Revolution Hasn't Happened Yet), so my POV is that it should have always been a bytecode interpreter, and WASM corrects that problem.

I'd love to read your post on GraalVM, btw! I read their paper on Truffle a while back, and what I understood of it sounded astoundingly cool! Also, they've been really kicking ass on the optcarrot benchmark (I don't think it was 400x, but it was impressive enough that I stopped and took note).


Elm could be something; it looks like they're doing their own thing, unlike Dart, which tried to improve upon JavaScript and which, in my opinion, is why it hasn't caught on. But I'm not sure about WASM. Aren't we kind of coming full circle and going back to the nineties with C/C++ for the web?


I doubt C/C++ would catch on for the web necessarily, but maybe Rust. It seems like that's the high-perf language that has the most momentum due to novelty/excitement and stewardship from Mozilla.

But JavaScript engines have improved so much. Always bet on JavaScript.

I agree. JS isn't going anywhere, but there are always going to be people who want something very different. I think Elm qualifies as quite different from JS.


TLDR: The web does not need a new language, but instead a new runtime to address the weaknesses of the DOM. Crucially, targeting this runtime should build upon, not abandon, the best parts of web technologies (simplicity, ease of access).

I don't think JavaScript is really a bottleneck to the performance of most web applications, and modern VMs perform miracles of optimisation. Additionally, the language is getting a lot of attention now, and with ESLint and Prettier I find it expressive, flexible and a pleasure to use.

If all JavaScript were magically rewritten in C++ overnight, I am sceptical it would even make that much difference, because the real bottleneck in performance is the DOM (not a new or original observation).

The DOM was never designed with performance as a consideration and certainly not with rich web applications in mind. Avoiding excessive repaints and reflows, minimising the number of DOM nodes etc. are all still concerns no matter which library or language you use.
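To make the repaint/reflow point concrete, here is a hedged sketch of the read/write batching idea used by libraries such as fastdom (the queued tasks are stand-ins; the scheduler is the point). Interleaving DOM reads and writes forces the browser to recalculate layout repeatedly, while grouping all reads before all writes lets it do layout once:

```typescript
type Task = () => void;

const readQueue: Task[] = [];
const writeQueue: Task[] = [];

// e.g. measuring offsetHeight, getBoundingClientRect, ...
function measure(task: Task): void { readQueue.push(task); }
// e.g. setting style.height, adding classes, ...
function mutate(task: Task): void { writeQueue.push(task); }

// In a real app flush() would be scheduled once per frame
// via requestAnimationFrame.
function flush(): void {
  while (readQueue.length) readQueue.shift()!();   // all reads first...
  while (writeQueue.length) writeQueue.shift()!(); // ...then all writes
}

// Even though the write was queued first, all reads run before all writes:
const order: string[] = [];
mutate(() => order.push("write"));
measure(() => order.push("read"));
flush(); // order is now ["read", "write"]
```

This is a workaround, which rather supports the argument that the bottleneck lives in the DOM, not in the scripting language.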

It is also harder for browser vendors to optimise, because they must still be forgiving of all the ambiguous/bad markup and CSS in the world. Compare this to modern JavaScript, where the browser does not tolerate errors.

I don't think it will ever happen, and it would take years, but the biggest win in performance would come from standardising a replacement for the DOM (a new web runtime). It is a hazy idea - and I imagine other people have had better thoughts - but it seems like the way to go is compilation...

  • Development would use familiar web technologies with a markup language and a stylesheet language (based on and familiar to HTML/CSS with optimisations).
  • There would then be a compilation step to convert markup and styling documents into a standard binary package intended for execution by the (also standardised) web runtime. These could possibly also be transpiled back to HTML/CSS for backwards compatibility.
  • HTTP/2 is a binary protocol already... so this would build on top of that, with the browser executing the binary directly rather than parsing the HTML and building the DOM/CSSOM.

The main benefit of this approach is that it would bring the web up to the same level of responsiveness and performance as a native application by removing the bottlenecks associated with parsing and managing the DOM/CSSOM. A secondary benefit is that it would remove the need for manual performance optimisations like inlining critical CSS and hacks like the PRPL (push, render, pre-cache, lazy load) pattern.

In some ways it would be taking us back to the plugin model - but unlike the plugins it would be an open standard governed by the W3C, with an implementation test suite which could be run by all vendors.

To quote Sean Denny from earlier:

> gone are the days of being able to view-source on a Web page and see how it works

This is a big concern, and to address it the source documents must be accessible from the same location as the binary. There are a couple of ways this might be strongly encouraged:

  • Firstly, have the servers (Nginx, Apache HTTP Server, S3) perform the compilation as an implementation detail of hosting - thus requiring them to have access to the source files.
  • Secondly, make crawlers (Google, Bing) require the source files, or mark down sites which don't provide sources.

This idea does sound interesting. I do have a question though: when you do "view source" on the page, I assume it shows the updated HTML source, correct? So would the markup source have to be updated anyway in the browser, would markup viewing not include any updates that have been dynamically done to the page, or would there be no viewing of the markup (in your opinion)?

Also, this does make me want to look into DOM performance. For some reason, JS seems to always be blamed (and I'm also guilty of that). But it does also make sense that markup, which needs to be re-rendered with each update, could actually cause the performance issues.


> I do have a question though: when you do "view source" on the page, I assume it shows the updated HTML source, correct? So would the markup source have to be updated anyway in the browser, would markup viewing not include any updates that have been dynamically done to the page, or would there be no viewing of the markup (in your opinion)?

The idea would be that upon opening developer tools the browser would fetch the source documents. I would envision this experience being akin to how source maps function for LESS/SCSS/transpiled JavaScript - but for markup instead.

The browser would be running native (compiled) code - it wouldn't be parsing the markup, building the DOM etc. As such there would not be the concept of "generated source" as it exists today, but it would still be possible to map the rendered views back to the original (declarative) source documents.

> Also, this does make me want to look into DOM performance. For some reason, JS seems to always be blamed (and I'm also guilty of that). But it does also make sense that markup, which needs to be re-rendered with each update, could actually cause the performance issues.

Definitely try profiling the layout and painting performance. In most cases, perceived slowness is far more likely to be layout-related than scripting-related.


I've been thinking: what if browsers adopted Elm completely as a new JS specification? I think there's a chance, but it's something like 0.01%, maybe a bit higher.


JavaScript doesn't need competition on the front end; it needs more, stricter strict modes. To explain: once static types become a thing, a higher strict mode should throw an error when someone doesn't explicitly declare argument and return types for their functions.

But this is largely an aesthetic choice (which newer features we want to code in), and I don't think it will ever get through any committees. It would be really nice, though - just like compiling your C++ code with the -std=c++1z flag, except this solution punishes you if you want to use older standards.
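A rough TypeScript analogue of what such a stricter mode looks like today, assuming `"noImplicitAny": true` (part of `"strict"`) in tsconfig.json:

```typescript
// With "noImplicitAny": true, the unannotated version is rejected
// at compile time:
//
//   function sum(a, b) { return a + b; }
//   // error TS7006: Parameter 'a' implicitly has an 'any' type.
//
// The "higher strict mode" described above is roughly this, enforced by
// the compiler rather than by a runtime exception:
function sum(a: number, b: number): number {
  return a + b;
}
```

The difference from the proposal above is that the check happens at build time, so no committee or browser change is needed.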


Some people don't like PHP, so they go to Ruby or Go or Python; some people hate JavaScript, but they essentially have nowhere to go. Do you see the problem now?


Thinking realistically about JS alternatives: it's never going to happen. The W3C will never, ever pick up another language as a browser standard. So, do you want an NPAPI solution like what Java did over the last decades? That was great, wasn't it? Especially if you were a sysadmin managing 20+ workstations in a company, with different OSes and different governmental web apps that required different Java versions and different settings to run. Feel the pain that sysadmins feel; I also lived in this hell.

Oops, luckily that's not possible anymore, because every major browser - Firefox, Chrome, Opera, Edge - blocked NPAPI execution, and not just that, they removed it entirely, so in the last couple of months it has NOT been possible to run Java in the browser anymore. Because, you know, many viruses got onto computers via Java. It was hacker heaven to have a user-installed execution environment on lots of enterprise PCs (unlike JS, which runs inside the browser's VM), where you could send any payload you wanted once the user accepted the execution. Goodbye Java, we will never miss you.

But what did the government do in this case? Nothing! We run all of these important apps for work in Internet Explorer 11. :) What will happen if Microsoft releases a Windows 10 update that removes NPAPI support from IE11 as well? Well, probably every major office and government institution - police, hospitals, the army, etc. - won't be allowed to update their Windows until a web app rewrite arrives (which, at the pace they work, could take 2-3 years). Nice new world, isn't it? Thanks to Java and its never-becoming-a-real-standard execution model. :) An alternative is to use an old Firefox/Chrome version (I use Firefox 52 ESR in one case, with updates turned off), but of course some government Java programs won't run in any browser other than IE11...

I would be the happiest person if I could program client-side apps in Go (natively integrated into the browser), but even if Google implemented that in Chrome, it still wouldn't become a W3C standard, so it's not worth bothering. Eventually we will have a WebAssembly compiler for most languages, but in its current state, without active projects and frameworks, no one is using it for everyday tasks like a client-side rendering library (and I think that's not even possible right now).

The main reason I would use WebAssembly on the client side is to protect the source code. If you're not programming in 3D and you only want to write client-side 2D apps (like with React or Vue), execution performance is really not that important for companies; what is important is time to market. The payload size is also important, but I don't believe that an extensive framework (which we haven't seen yet) will be much smaller in binary form than a minified Preact project.


But now they can go and use Elm, Purescript, ReasonML... or maybe something else.

These languages compile back to JavaScript, just like TypeScript. I don't see the point of using something other than ES2018 + Flow + Babel, or TypeScript + Babel. At the end of the day, all of those important type-safety features and syntactic sugar will be stripped out of your code, leaving only browser-compatible, old-standard, slightly optimized JavaScript.
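The stripping is easy to see in a tiny example (assuming default `tsc` settings; `double` is a made-up name for illustration):

```typescript
// TypeScript source:
function double(n: number): number {
  return n * 2;
}

// What tsc emits - the annotations are simply erased, nothing is added:
//
//   function double(n) {
//     return n * 2;
//   }
//
// All the checking happened at build time; the JS that ships to the
// browser carries no trace of the types.
```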

Actually, there's a Flow library somewhere (sorry, I forget the name) which compiles the type checks for the client side as well, so they aren't stripped out at runtime (I unfortunately don't know whether Elm, PureScript or ReasonML have such a feature).

Having another W3C-standard language on the client side would mean the browsers implement its VM to execute it, meaning you wouldn't have to transpile back to JavaScript.


Transcompiling from another language to JS is not competition (in my view); it's like JS on mobile: low performance, missing features, headaches.

A new language adopted natively by the browsers... most likely will never exist (with the conflicting browser vendors, it's hard to agree on one).

WebAssembly is enlarging the ecosystem - you can use the power of compiled languages - but it is not competition; those languages are limited in what they can do.

Flash is dead, and Microsoft and Google each tried to add a competitor and failed.

Time will tell.


I think the best thing for web dev, which would bring us web 3.0, would be for browsers to adopt an additional language natively, or at least to open the API for third parties to do it somehow.


I don't see any good reason to do that, really, and the downsides are immense: it would fragment the market and all the projects/teams.

Devs will jump from one to another like mobile devs do from iOS to Android.
Each library must be implemented twice, and each developer must learn two languages and twice the average number of frameworks.
Just see what happens now with Android (3-4 ways to make apps: JS, Java, Kotlin, C++ and now Dart/Flutter). Multiply that by the web's complexity (tools, APIs, frameworks, paradigms, Node.js, ...).

So I hope not; the web is already a complex ecosystem with one language.
All other industries jumped to JS, not vice versa. You can write neural networks and blockchains in JS; browsers didn't adopt Solidity or Python.

I guess the API you are talking about is WebAssembly: you can use any language, presuming you can compile it to the standard.


Honestly? No, I don't think it does. It's evolving so quickly and absorbing features from other languages that I don't think there's any technical advantage to writing in another language. Strong typing is basically a static-analysis issue that doesn't require browser involvement and already has two good solutions (Flow and TypeScript).


IMO, that's an argument in favour of the OP's point. In your mind, JS evolving is a good thing, so we don't need a language to compete with it. But there is another perspective: that its evolution is a bad thing, and we need an alternative that is small and stable. So, depending on what you prioritize, the JS changes are positive or they are negative. Hence, it makes sense to decouple the browser from the language and treat it like a platform (i.e. if it were a VM, then you could tell it where to find the bytecode for your interpreter, and then you could ship code to it in any version of any interpreted language that you wanted, or compile your language to its bytecode). Then old websites don't break, b/c they can link to the version of JS they were implemented against (exempting changes in the platform itself). And you don't have to wait for browser adoption to ship features, b/c nearly all features are built on the platform rather than provided by it natively, so you can implement new features and ship them to the browser without depending on the browsers to implement them, and to implement them consistently.


JavaScript is a really broken language by design; it is used just because it is the only option, just like VBA in Microsoft Office macros. I think a lot of people would choose another language if they could.


I generally hear things like 'JavaScript is a broken language by design' from the same devs who don't code in JavaScript and still call it a 'toy' language as if this were still 2002. In 2018, JavaScript is a rich ecosystem that can be used in many different ways and styles. It has frameworks that make static-typing proponents happy, you can write in an object-oriented, functional or procedural style, and you can set very clear standards for your team based on your team's preferences. It's really one of the most diverse, flexible, expressive languages on the market today. When I hear 'I don't like JavaScript', I translate that to 'my experience has been with teams who use JavaScript in a way that I don't like'.


As I said, given another option, lots of people would not choose JavaScript.

And I just don't know that you can prove anything that you're saying. Would lots of C++ and Java devs choose something else for backend programming? Probably. Would lots of front-end programmers choose something else for the front-end? Probably not. Would people with extensive MEAN Stack experience choose something else for the full stack? I know I would if there were a good technical reason to do so, but in general, I would enjoy the benefits of a unified stack. Given that the question is 'does Javascript need competition on the front-end' my personal answer would be no, because fracturing the ecosystem is a bad thing (I lived and coded through the browser wars JScript/JavaScript/ECMAScript drama. I don't want to go back to writing the same code 3 times for different browsers)

So you like JavaScript for the full stack; lots of people do not, but they have no choice on the front end. Having options is always good, so yes, it needs competition.

Sorry, I won't spend time going over the specific poor design choices of JS; there is plenty of information out there.


Um... no, it's not. That is the only response I can think of. JavaScript is not "a really broken language".


Going against JS on the front end is counter-productive... All the options that have worked out are JS derivatives transpiled into JS. And that produces code bloated with polyfills.

Unless we find a way to pre-load polyfills somehow, it's always going to be difficult to bring a language's whole runtime into the browser at a fraction of a second's notice...
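For comparison, the usual way to keep polyfill bloat down today is feature detection: prefer the native API and only fall back when it's missing. A hedged sketch, using `structuredClone` as the example feature (the JSON fallback is a deliberate simplification that drops functions, Dates, cycles, etc.):

```typescript
// Prefer the native implementation; fall back only when it is missing,
// so up-to-date browsers pay no extra cost.
function getClone(): <T>(value: T) => T {
  const native = (globalThis as any).structuredClone;
  if (typeof native === "function") {
    return native.bind(globalThis);
  }
  // Simplified fallback: fine for plain JSON-shaped data only.
  return <T>(value: T): T => JSON.parse(JSON.stringify(value));
}

const clone = getClone();
const copy = clone({ nested: { n: 1 } }); // a new, independent object
```

Shipping a whole second language runtime is a much bigger version of this problem, which is the point being made above.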


Interesting that both use the Google Closure compiler.


Well, this is happening already with WebAssembly. I'm not on top of the news when it comes to WASM, but as long as you can produce ways to interop WASM libraries between languages, it will be fine. For example, let's say someone wrote a WASM lib in Rust and I want to consume it in my C# app or Elm app (all of them targeting the browser via WASM); in my opinion there should be no hassle in doing so.

Why? Well, simply because the sole reason JavaScript has such a vast number of libraries and tools and such a great ecosystem is that everything is interoperable with everything else (yeah, it's a single language, I get it).

But if we start doing WASM without interoperation, I think it will bring some harm to the web landscape and cause fragmentation and niche positions across the industry. Mainstream development may no longer be JavaScript, but I believe it will settle on one main language (resembling how today there is a lot of compile-to-JavaScript stuff).
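To illustrate the consuming side of that interop: below is a hand-assembled, minimal WebAssembly module exporting `add(i32, i32) -> i32`, instantiated from TypeScript. In practice the bytes would come from a Rust or C++ toolchain rather than being written by hand; the point is that the embedding API is the same regardless of the source language:

```typescript
// Minimal WASM binary: one function, exported as "add".
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Using the untyped global avoids depending on DOM type definitions here.
const WA = (globalThis as any).WebAssembly;
const instance = new WA.Instance(new WA.Module(wasmBytes));
const add = instance.exports.add as (a: number, b: number) => number;
```

Whatever language compiled the module, the host sees the same thing: numeric imports and exports. Richer types (strings, objects) are exactly where the interop story still needs work.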


My thought is...


Do you mean something more like "I personally just want another option besides JavaScript"?

I personally am a huge proponent of pushing browsers towards the same specifications. JS is the de facto browser scripting language. It isn't broken (though lots of devs don't like it...) and it's only getting better. I personally love JS for basic web stuff, front-end libraries, AND Node as well. Javascript is only growing and becoming more robust. So why does it "need" competition?


I don't think so, just as CSS and HTML don't need competitors.
All we need is more features and more homogeneous support across all browsers.


I think it'll be very interesting to see how existing and new languages that target WebAssembly end up handling the DOM.


If you watch the part of the Google keynote about WASM, they show how to create a C++ program and compile it to WebAssembly. Basically, all DOM manipulation is done in a pseudo-JavaScript macro inside the C++ code.



I think Dart actually fits the bill very nicely...