Modern computers work on layers of abstraction.
So what is abstraction?
The Cambridge Dictionary says:
- the situation in which a subject is very general and not based on real situations.
- an idea that develops by looking at or thinking about a number of different things
Hmm. Not very helpful.
So, in this article, I will try to explain it my way.
Computers are pretty dumb machines by themselves.
A CPU contains billions of transistors baked into thin wafers of silicon, each so small that only a few electrons pass through it at a time. Every transistor is an electrically controlled switch (that is what a transistor is: an electrically controlled switch), and with these switches we can implement logic gates.
Logic gates belong as much to physics as to computer science, so I will save them for another article.
The key takeaway is that when current passes through one of these gates it reads as a ‘1’, and when there is no current it reads as a ‘0’.
So we can give instructions to these logic gates as a series of ‘0’s and ‘1’s.
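To make this concrete, here is a minimal sketch in JavaScript (the language we will use later in this article) that models three basic gates as functions on 0s and 1s. Modelling gates as little functions is purely an illustration; real gates are wired from transistors, not written in code.

```javascript
// A toy model of logic gates: each "gate" takes 0s and 1s and returns 0 or 1.
// Real gates are built from transistors; these functions only mimic their behavior.
const AND = (a, b) => a & b; // 1 only when both inputs are 1
const OR  = (a, b) => a | b; // 1 when at least one input is 1
const NOT = (a)    => a ^ 1; // flips 0 to 1 and 1 to 0

// Print the truth table for AND.
for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    console.log(`AND(${a}, ${b}) = ${AND(a, b)}`);
  }
}
```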
That is it.
At the most basic level, a series of ‘0’s and ‘1’s is pretty much all they natively understand.
Every CPU comes with an instruction set. It is the manual that tells us which ‘0’s and ‘1’s to input to achieve a particular effect.
This is called machine language.
The problem with machine language is that pretty much nobody can write anything useful in it.
01001101000111100010110010011010001111000101101 ....
A game, perhaps? Who can tell?
That is why computer scientists devised a way to write these instructions with a simpler syntax while still letting computers understand them.
Enter the world of abstraction.
The first layer of abstraction is the assembly language for that particular CPU.
It is a human-readable form of machine language.
In assembly, letters of the English alphabet enter the scene.
Each instruction is translated into machine language by an assembler, a program written in machine language itself.
In assembly language, each instruction is converted to exactly one instruction in machine language.
Writing anything substantial this way is pretty inefficient.
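To illustrate that one-to-one mapping, here is a toy "assembler" sketched in JavaScript. The mnemonics and opcodes below are invented for this example; a real assembler targets the actual instruction set of a particular CPU.

```javascript
// A toy assembler: each mnemonic maps to exactly one (made-up) opcode.
// These opcodes are invented for illustration and belong to no real CPU.
const OPCODES = {
  LOAD:  '0001',
  ADD:   '0010',
  STORE: '0011',
  HALT:  '1111',
};

// Translate a list of mnemonics into a string of 0s and 1s, one opcode each.
function assemble(program) {
  return program.map((mnemonic) => OPCODES[mnemonic]).join(' ');
}

console.log(assemble(['LOAD', 'ADD', 'STORE', 'HALT']));
// -> 0001 0010 0011 1111
```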
So came the next level of abstraction: languages like C (low-level by today's standards, but a big step up from assembly).
In C, a single line of code is converted into many machine-language instructions.
A special program called the C compiler converts instructions written in C into machine code.
C is useful because compiled C programs take comparatively little memory.
Memory is a concept of its own; for now, just remember that all these lines of code have to be held somewhere before (and after) executing. That place is called memory (RAM).
In the past couple of decades, the memory of a typical computer has increased exponentially. RAM is measured in gigabytes now.
So we can afford the luxury of writing something like this in a higher-level language:
```python
# Python
print("Hello World")
```
Here, even though it is only one line of code, the Python interpreter (which plays the role a compiler played above) converts it into lines and lines of machine code so that "Hello World" is displayed on the screen.
This is a pretty high level of abstraction.
I will further explain this concept by writing a simple program in JavaScript.
In JavaScript, there is a method (function) called trim().
What trim() does is remove the whitespace before and after a string.
So, " Hello World ".trim() becomes "Hello World".
Whenever we need to trim a string, we simply need to call this function.
Now let me go one level lower in abstraction and implement this myself in JavaScript.
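Here is one way such a function might look; walking two indices inward from both ends is just one possible approach, and the whitespace characters checked here are a simplification of what the built-in trim() actually handles.

```javascript
// customTrim: removes whitespace from both ends of a string by hand,
// without calling the built-in .trim() method.
function customTrim(str) {
  const isSpace = (ch) =>
    ch === ' ' || ch === '\t' || ch === '\n' || ch === '\r';

  let start = 0;
  let end = str.length - 1;

  // Move the start index forward past leading whitespace.
  while (start <= end && isSpace(str[start])) {
    start++;
  }

  // Move the end index backward past trailing whitespace.
  while (end >= start && isSpace(str[end])) {
    end--;
  }

  // Everything between the two indices is the trimmed string.
  return str.slice(start, end + 1);
}
```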
This function, customTrim(str), does the same thing as the built-in .trim() method in JavaScript.
See the output:
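Calling the hand-written version and the built-in method side by side, both print the same trimmed string:

```javascript
console.log(customTrim('   Some thing   ')); // -> Some thing
console.log('   Some thing   '.trim());      // -> Some thing
```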
Which is easier: " Some thing ".trim(), or the function I have written above?
Abstraction saves us by letting us get the same work done with far fewer lines of code.
I hope you have started to grasp the concept. Understanding abstraction is essential to understanding computer science.
Vinod Mathew Sebastian is a Full Stack Developer