
Abhinav Kumar

Moore's Law and Programming Languages

Since Moore's law is leveling off, developers will now have to level up their game to extract more performance out of the same chips. I predict less JS (or Electron) and Python/Ruby/Node, and more Go, Rust, and Scala (and maybe assembly on the side).

What do you think?

Top comments (13)

rhymes

I don't think we'll go back to Assembler because chips are not getting that much faster :-D

I don't even know if the speed of the actual language will be the sole determining factor in the future; right now it's not, at least for many types of apps.

So yeah, speed is important, but it's not all there is to programming languages, especially in a world of microservices and distributed computing with IO bound apps.

Serverless environments will probably give languages with a fast startup time an edge, but still... Azure only supports JavaScript, Google only supports JavaScript and Python, and AWS Lambda is the only one supporting a wider array of languages.

Ben Halpern

Perhaps not speed so much as language design. When parallelism is the name of the game, some languages are much more appealing than others.

That being said, there are a lot of ways to scale horizontally and the language rarely seems to be the actual bottleneck. It's just part of the stew of decisions that needs to be made.

Mihail Malo • Edited

AWS Lambda, being a real container and not a managed language isolate, should finally allow uploading binaries as the entry point (no more neon-bindings, srijs/rust-aws-lambda, lando, or various other hacks) and optionally provide its events/responses in a binary format like FlatBuffers.

Raj Adigopula

Functional programming to the rescue. FP is getting more attention partly because of the failure of Moore's law: immutable by design => ready for parallelism. Even functional JS libs like Immutable.js, Ramda, etc. are already more popular.

Another direction to consider is GPU programming. I came across the Spiral language (github.com/mrakgr/The-Spiral-Language) the other day, aimed at programming GPUs to leverage parallelism by making functions inline (mimicking Church lambdas).
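
For instance, a minimal Python sketch (the function and data here are invented purely for illustration) of why "immutable by design" maps straight onto parallelism: a pure function over data it never mutates can be run sequentially or fanned out across cores without locks.

```python
from multiprocessing import Pool

def normalize(record):
    """Pure function: reads its input, returns a new tuple, mutates nothing."""
    name, score = record
    return (name.strip().lower(), score / 100.0)

if __name__ == "__main__":
    records = [("  Alice", 87), ("BOB  ", 92), ("Carol", 78)]

    # Sequential version.
    print(list(map(normalize, records)))

    # Parallel version: safe to fan out across cores precisely because
    # normalize shares no mutable state with the other workers.
    with Pool() as pool:
        print(pool.map(normalize, records))
```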

Txai • Edited

I think we are moving more and more towards parallelism. CPUs can't get any faster without running into heat issues, so vendors are increasing the number of cores instead, and programmers seeking performance will have to exploit that.

Thomas Landin

CPUs have stopped progressing as fast, but RAM is starting to move forward in a big way.

Once 16-32 GB is more common on consumer machines, I think we'll see a shift towards VM-based languages using a VM server model, where things are kept running in memory and cached when you "close" the app, so it's simply reloaded from RAM when you need it again.

Modern CPUs are plenty fast for the things we do with them today; we don't need Rust to make snappy desktop software, and developer time is still very valuable.

Abhinav Kumar

I really wish Electron would move to a shared-runtime approach. It's more likely now that Microsoft owns Electron.

Thomas Landin

Yeah I've had that same thought! I run some Electron apps because their usefulness outweighs the resource drain (i.e. VSCode), but I sometimes opt not to install another Electron app because it feels wasteful to have N apps all running their own copy of it.

Mihail Malo • Edited

Didn't realize Scala was fast.
And Go's startup time in Lambda is as slow as Java's.
F# is shockingly fast for its premise though.

As Rust gets more and more ergonomic (and the ecosystem grows), more will be written in it.
But the key to post-Moore's law scaling is microservices and horizontal sharding/load-balancing.
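
As a rough sketch of the horizontal-sharding part (the shard names and key format below are made up), routing by a hash of the key spreads load across nodes instead of waiting for a faster single node:

```python
import hashlib

SHARDS = ["db-0", "db-1", "db-2", "db-3"]  # hypothetical shard nodes

def shard_for(key: str) -> str:
    """Map a key to a shard by hashing it, so load spreads evenly across nodes."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user:42"))    # the same key always routes to the same shard
print(shard_for("user:1337"))
```

A real setup would use consistent hashing so keys don't all move when a shard is added, but the routing idea is the same.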

Martin Häusler • Edited

First of all, yes, current CPU architectures won't get much faster, at least not at the same pace as before.

Scripting languages / interpreted languages are not going to disappear. There are plenty of use cases where the need for flexibility outweighs the need for performance. However, they will be pushed out of the "gray zone". For this reason, I don't see Node.js being used for serious server backends in the future.

So should we go back to manually writing assembler? No, absolutely not. Modern compilers produce assembly code that is so highly optimized it would be hard for a human to even reach the same performance, let alone handle the complexity of a real-world application with the limited tools that assembler gives you. Also, if new assembler instructions become available, you would have to rewrite the entire program manually; with a compiler, you can adapt the code to use the new instructions and simply recompile.

The behaviour is very much like a pendulum. At the moment, we are at the tip of the "let's forget about compilation and interpret everything" side. But things are changing. Look at TypeScript, for example. Or Elm. We are going back to compiled stuff. Oh, hi there, WebAssembly!

andrew

I think routines will be optimized in the background (like the Java Streams API opting for parallel collection processing when it makes sense).
There is a good saying that "the only code we can read is single-threaded code"; I'm not sure who said that.

We've been there with Java threaded apps, and with inside-out Node.js code, so I guess it's time to get back to the older ideas of the actor model and true OOP (as opposed to class-based programming), where each actor object can run in its own thread.
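
Something like this toy Python actor (the class and messages are invented for illustration): the actor owns its state, drains a mailbox in its own thread, and nothing else touches that state, so no locks are needed.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: private state, a mailbox, and a thread that drains it."""

    def __init__(self):
        self._count = 0                # state owned by this actor only
        self._mailbox = queue.Queue()  # messages are the only way in
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)

    def stop(self):
        self._mailbox.put("stop")
        self._thread.join()
        return self._count

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "stop":
                return
            self._count += message     # no locks: this thread is the only writer

counter = CounterActor()
for i in range(5):
    counter.send(i)
print("final count:", counter.stop())  # prints 10
```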

Dustin King • Edited

I think we'll mostly just need to have a good grasp of concurrency (which has already been the case). For a lot of things, you might just be able to throw more cores or more servers at it.

When you need more single-thread performance out of, say, Python, there are ways to get it: PyPy, NumPy, Cython (which compiles Python to C, or something along those lines), or calling out to some other library that's already written in a systems language.

If you have these tools in your toolbox, Python might be fast enough for you. There are likely similar tools for Ruby and JS.
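
For instance, the NumPy route usually looks something like this (the array contents here are arbitrary): the hot loop leaves the interpreter and runs inside NumPy's compiled code instead.

```python
import numpy as np

values = np.random.rand(1_000_000)

# Pure-Python loop: every addition goes through the interpreter.
total_slow = 0.0
for v in values:
    total_slow += v

# Vectorized version: the summation runs in NumPy's compiled C code.
total_fast = values.sum()

print(total_slow, total_fast)  # same result, very different speed
```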

Of course, some programmers will still need to write the fast libraries, so there will still be a need for lower-level languages. It's just a question of where on the stack you want to specialize. For applications, Python (or *sigh* other high-level languages, I guess :P) is probably all you'll need until you reach a certain scale.

Games might be another story.

Paul Melero

Who says it's leveling off?

medium.com/predict/moores-law-is-a...