
hepisec

Choosing a programming language for bloody Beginners

This post is a sequel to my post 'Programming for bloody Beginners' and will give you an overview of the types of programming languages, their purposes and finally how to choose a language to learn.

Overview

On Wikipedia you can find a list of several hundred programming languages. Some of them you may have heard of, others have funny names like 'Brainfuck', and most of them are probably completely unknown to you.

But what is the difference between all these languages?

You need to know that there are basically two kinds of programming languages: compiled languages and interpreted languages. Before you can run a program written in a compiled language, you have to compile the program. That means that the human-readable source code of your program gets translated to machine code, which your computer can understand. We call this machine code the executable (e.g. an .exe file on Windows systems).

On the other hand, an interpreted language requires an additional program to execute your code on your computer. This program is called the interpreter. The interpreter reads the program code and executes its statements.

You might hear the term scripting language as a synonym for an interpreted language and the term script as a synonym for a program written in a scripting language.

So why do these two kinds of programming languages exist?

Both types have their advantages and disadvantages.

Programs written in a compiled language have to be compiled for each targeted computer platform (e.g. for Windows, macOS and Linux). That is because of the different file formats used for executables on these platforms. It may also be necessary to compile the program depending on the targeted hardware (e.g. an x86 CPU in your laptop requires a different executable than the ARM CPU in your phone). So the distribution of our program may become complicated. But once we have mastered this complexity, we get excellent performance from our program. And typically you do not target all platforms and hardware configurations anyway, as you write your program to solve a certain problem, which may only exist on a certain platform.

So when we know exactly in which environment our program will run, we should choose a compiled language to get the best performance available.

For interpreted languages someone has already taken the effort to compile the interpreter for many different platforms, and you only have to distribute your source code file. But the process of reading and interpreting the file on every program execution cannot be as fast as a compiled program. Thanks to optimizations in the interpreter you won't notice much difference during normal program usage, but you will at least notice a difference during program startup. This is when the interpreter has to read all your program code, a step which is not necessary for a compiled program.

You don't know which platform your program will be used on, or you want to target a broad audience? Then an interpreted language is the right choice for you.

Type-safe vs. weakly typed languages

There wouldn't be as many languages today if programmers only had to choose between a compiled or an interpreted language. Another important difference between languages is the type system. You can basically distinguish between two type systems: type-safe and weakly typed.

In a type-safe language your variables, function parameters and return values are declared with a certain type. This helps the computer run the program with efficient memory management, and with type-safety you are less prone to certain kinds of software bugs. But it's another feature which adds complexity. You need to know the range of the values you want to store in a variable to choose the correct type. E.g. most languages have different integer types with different sizes in memory and thus different ranges of integers that can be stored. Usually integer types are available in 8 bits (called a byte), 16 bits (short), 32 bits (int) and 64 bits (long). In some languages you might also need to declare whether the type is signed (supports negative values) or unsigned (positive values only). As the sign (+ or -) is stored in the leftmost bit of your variable's memory, you lose one bit for storing the actual number. The following table shows typical integer types and ranges.

Typical integer types and ranges:

Type  (size)      Signed range                                                Unsigned range
byte  (8 bits)    -128 to 127                                                 0 to 255
short (16 bits)   -32,768 to 32,767                                           0 to 65,535
int   (32 bits)   -2,147,483,648 to 2,147,483,647                             0 to 4,294,967,295
long  (64 bits)   -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807     0 to 18,446,744,073,709,551,615
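
To make this less abstract, here is a minimal, hypothetical sketch in Java (a type-safe language we will look at below). Every declaration commits to one of these types; note that Java's primitive integer types are all signed.

public class IntegerTypes {
    public static void main(String[] args) {
        byte smallNumber = 127;                // 8 bits: -128 to 127
        short mediumNumber = 32_000;           // 16 bits: -32,768 to 32,767
        int regularNumber = 2_000_000_000;     // 32 bits: roughly +/- 2.1 billion
        long hugeNumber = 9_000_000_000_000L;  // 64 bits: the trailing L marks a long literal

        // byte tooBig = 128;                  // would not compile: 128 does not fit into 8 bits

        System.out.println(smallNumber + " " + mediumNumber + " " + regularNumber + " " + hugeNumber);
    }
}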

The same goes for floating point numbers (types called float or double) and others. Type-safety often requires conversion between different types, e.g. to compare two values both must be of the same type. In a weakly typed language you can do something like

if (1 == "1") then [...]

This actually compares the integer 1 with the string "1", which are stored differently in memory. But a weakly typed language automatically performs a conversion to a common type. In a type-safe language the code would look like this

if (1 == parseInt("1")) then [...]

The function parseInt in this example takes a string parameter, converts it and returns an integer, which is then compared with the integer on the left side.
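
As a concrete, hedged sketch of the same idea in Java (a type-safe language): comparing an int directly with a String is rejected by the compiler, so you have to convert the string first, e.g. with Integer.parseInt.

public class TypeSafety {
    public static void main(String[] args) {
        // if (1 == "1") { ... }             // does not compile: int and String are incompatible types

        if (1 == Integer.parseInt("1")) {    // explicit conversion from String to int
            System.out.println("The values are equal.");
        }
    }
}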

If you want an easy-to-learn language, you should start with a weakly typed language. But if you are not afraid of complex type systems, go with a type-safe language.

(Weakly typed languages usually have other pitfalls instead of required type conversions.)

Memory-safe vs. memory-unsafe languages

One more feature to check is memory-safety. A memory-safe language takes care of managing all the memory your program requires to run. That means you don't have to care much about the size of the types and data which you store in memory during program execution. This makes learning the language much easier. With a memory-unsafe language you have to code the memory management on your own. This might be helpful for performance-critical programs, but in most cases such self-coded memory management is prone to security vulnerabilities and requires some experience.
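
As a rough sketch of what memory-safety means in practice, here is a hypothetical Java snippet: memory is allocated with new, and the garbage collector reclaims it automatically once it is no longer referenced. In a memory-unsafe language like C you would have to reserve and free that memory yourself.

import java.util.ArrayList;
import java.util.List;

public class MemorySafety {
    public static void main(String[] args) {
        // Allocate a list with one million integers; how much memory this needs is handled for us.
        List<Integer> numbers = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            numbers.add(i);
        }
        System.out.println("Stored " + numbers.size() + " numbers.");
        // No explicit free/delete: once 'numbers' is no longer referenced,
        // the garbage collector releases the memory automatically.
    }
}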

Programming paradigm

With only 3 options to choose from we still wouldn't have several hundred programming languages. But programming languages are not only created considering the 3 features discussed before. Another very important feature of a programming language is the programming paradigm. It defines how your code is organized. E.g. you could write object-oriented code, where you define classes with methods and properties, or procedural code, which is executed more or less "from top to bottom", or functional code, where your program is composed of functions. There are many different programming paradigms, but I'm mostly used to object-oriented code. During your career as a developer you'll build your own preference.
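
To make the paradigms a bit more concrete, here is a hypothetical Java sketch that sums a few numbers in a procedural, an object-oriented and a functional style (Java supports all three to some degree):

import java.util.List;

public class Paradigms {

    // Object-oriented style: data (the numbers) and behavior (sum) live together in a class.
    static class Calculator {
        private final List<Integer> numbers;

        Calculator(List<Integer> numbers) {
            this.numbers = numbers;
        }

        int sum() {
            int total = 0;
            for (int n : numbers) {
                total += n;
            }
            return total;
        }
    }

    // Procedural style: a plain function that is executed "from top to bottom".
    static int proceduralSum(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4);

        System.out.println(proceduralSum(numbers));                  // procedural
        System.out.println(new Calculator(numbers).sum());           // object-oriented
        System.out.println(numbers.stream().mapToInt(n -> n).sum()); // functional (streams and lambdas)
    }
}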

Examples

Now we have seen enough features of programming languages to explain why there is such a variety of them. Here are some examples of popular languages and the features they use.

PHP

PHP is my current language of choice for web projects. It is an interpreted language and the interpreter is available on almost any platform. It is typically used in server-side applications. PHP is weakly typed, memory-safe and supports a procedural programming style as well as object-oriented code. With newer versions the PHP community is bringing more type safety into the language.

Java

Java is my preferred language for client-side applications. It's kind of a hybrid language: the code is compiled, but it still requires a host application to run the program (the JVM, short for Java Virtual Machine). So you can compile your application once and run it everywhere a JVM is available (almost anywhere). Java is type- and memory-safe and strictly object-oriented. The JVM abstracts system-specific behavior, therefore Java is well suited for platform-independent applications.
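
As a minimal sketch of this compile-once, run-anywhere workflow (the javac and java commands in the comments are the standard JDK tools):

// HelloWorld.java
// Compile once to platform-independent bytecode:  javac HelloWorld.java   (produces HelloWorld.class)
// Run it on any system that has a JVM installed:  java HelloWorld
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}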

JavaScript

JavaScript has no relation to Java. As you can guess from the name, it is a scripting language. You are already using the default JavaScript interpreter of any desktop system: your web browser. So all you need to get started writing a "Hello World" application in JavaScript is a text editor and your web browser. It is a weakly typed, memory-safe language and supports different programming paradigms, including procedural, functional and object-oriented.

C

C is a compiled language, which is memory-unsafe and type-safe. If you want to write applications for normal users with a graphical user interface, you typically won't choose C. But if you want to develop hardware drivers or very performance-critical algorithms, C is the language of choice. It's a procedural language and has an object-oriented descendant called C++.

Conclusion

This post should give you an overview of the basic differences between programming languages, and hopefully you are now able to pick a language to learn.

Top comments (7)

Daniel Scott

I'm curious as to why PHP is your preferred choice for web applications. I've used it a lot, and I gravitate towards it whenever I'm developing on a LAMP stack or building on a CMS like Wordpress or Drupal, since it's well supported and well understood, but - frankly - it's an awful language full of bizarre behaviours that cause huge amounts of frustration.

I see that all the cool kids are starting to jump on the node.js bandwagon. Drupal is dying (and rightly so). Wordpress is thriving, but there are noises being made about whether it'll some-day be possible to move away from PHP (not anytime soon I suspect).

Personally, I've always thought server-side JavaScript made an awful lot of sense (learn one language and you can code on both sides of the browser). Plus, other than the whole having-to-deal-with-floating-point-numbers-everywhere dilemma, JavaScript is really quite a lovely language.

Netscape used to push server-side JS decades ago (but hardly anyone used their web server), and Microsoft had their own "JScript" implementation back in the classic ASP days but tended to push VBScript far more heavily because ASP developers tended to be VBA developers who'd been tasked with "building a website".

Node.js seems to be the "third-time lucky" for server-side JS and I'd be inclined to take a good-hard look myself.

Currently, if I'm building something bespoke, I'll tend to gravitate towards ASP.Net MVC - because it's lovely. This seems to be creeping into the open source space now that Micro$oft are doing some good work with the whole ASP.Net Core thing. Working in C# (a high-level compiled language) is sooooo much more productive for me than horrible PHP. All the dumb mistakes you might make get caught early in the coding process, leaving far fewer issues to track down and fix at runtime. I've had some very choice monosyllabic words for PHP (and its designers) leading up to project deadlines before (while tracking down bugs that never should have been able to happen).

I tend to use PHPStorm when I write PHP, and it tries its best to work like an IDE for a compiled language, but all the idiotic things that PHP does (and encourages developers and designers of PHP frameworks to do) tend to hobble its efforts. True, server-side JS will share a few of those issues, but at least the language wasn't designed by the brrrrp who designed PHP.

hepisec

I agree that PHP has its flaws, but if you work with a framework like Symfony, you get an awesome toolset to quickly develop enterprise grade web apps. What I noticed from my students is that they need to understand the difference between server side and client side code. When they switch languages from PHP for server side to JS for client side, this difference becomes more clear.

In my own case, I also use PHP because it fits in our company environment.

Lewis Cowles

A little detail-oriented for a new programmer. Perhaps after they have their first program they can focus on typing, compilation, etc.

hepisec

Do you think that the post is too hard to understand, or are there just too many details in it?

Michael Minshew

I think it was useful. Not sure that a pure total beginner would get it all, but it's both introductory and a reference, and it at least points to things you'll want to look up down the road. Love this type of stuff!

Chad Windham

Kinda depends on the individual beginner. In my early days as a complete noob (right now I'm upgraded to only being a partial noob...) I would have loved that article. Because it is full of really useful terms and any of the stuff I didn't understand I would look up ASAP. But I definitely could see a lot of complete beginners being overwhelmed with the depth of details if they haven't even written "hello world" yet...

Lewis Cowles

I understand it, but as someone who helps businesses with people who are new to subjects, I thought it was a bit too much at once for beginners who have never coded and don't know where to start.

Perhaps interspersing examples would break it up a bit more? Perhaps fewer things people can worry about later, such as compiled vs. interpreted (IMO everyone should start interpreted, because then they don't need to learn about compiler flags and build-system nuances).