Josh Carvel · Originally published at joshcarvel.com

How Computers Work (Part 1)

Intro ☕

As a new developer with no computing background, it's possible to jump into coding without really knowing anything else about computers, besides being a typical user of them. But it can be confusing, and a little embarrassing, to be completely clueless on this topic.

This three-part series takes an enormous topic and gives you the absolute fundamentals that you could learn over a cup of coffee, using language you can actually understand, so you have the confidence to make your home in the world of computers.

This part looks at the fundamentals, Part 2 covers hardware and the CPU, and Part 3 covers software and operating systems.

Let's get started!

Using electricity ⚡

As you know, modern computers are powered using electricity, which is conducted along copper wires in the computer's circuitry. When the computer receives some input (keyboard press, mouse click, voice command etc.) that information is converted to electrical signals, and when we want some output, the electrical signals are converted into something like light (displaying text and images on the monitor) or sound (via speakers).

Measuring the voltage (pressure) of an electrical signal over time gives you lots of different values. A device that uses those values exactly as they are for the output is called analogue, from a Greek word meaning 'proportionate', i.e. the processed values are proportional to the voltage. Old TVs and radios are an example.

Modern computers are digital. They don't just relay information, they give electrical signals meaning. A signal under a certain threshold of voltage can be thought of as 'off', and over a certain threshold is 'on'. The data is reduced to just two values. We can also think of these values as the digits 0 and 1 (hence the term 'digital').

It is possible to use more values in computers, so you may wonder why we just use two. For one thing, it is easier to identify the signals reliably when there are just two. But it also makes it easier to write logic - we'll get to this shortly.

Almost unbelievably, with just two values we can do all the things modern computers can do. But how?

Encoding data 📖

The first thing you can do with 0 and 1 is represent data.

There is a number system based on these two numbers: binary (from 'bi', meaning two). We use binary as a code for all the data we have. We can take words, numbers, audio, images - you name it - and convert them to binary, process them in a computer, then convert them back again for the output.

Numbers

We use the decimal number system (from 'deci', meaning ten), which has 10 digits (0 - 9). When we get to 9, we run out of digits. So we imagine columns of digits that increase by a factor of ten each time (remember long addition and subtraction from school?). If we have a 1 in our tens column and a 0 in our 1s column, we call that 'ten' and write it as 10.

In binary, you run out of digits after 1! How do we count to 2?

Well, we still have columns, but they increase by a factor of two. We can have a 1 in the 2s column and a 0 in the 1s column, which is written 10. It looks the same as ten in decimal, but it's the number two in binary.

So you can represent any decimal number in binary; it just takes a lot more columns, i.e. digits, to do it. This is how computers process numbers.
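
Here's a rough sketch of that conversion in Python (the to_binary function is just for illustration - Python's built-in bin does the same job):

```python
# Convert a decimal number to binary by repeatedly dividing by 2
# and collecting the remainders (the binary digits, right to left).
def to_binary(n):
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next digit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(2))   # 10   - a 1 in the 2s column, 0 in the 1s column
print(to_binary(10))  # 1010 - 8 + 2 = ten
print(bin(10))        # 0b1010 - Python's built-in agrees
```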

Characters

What about words, not to mention punctuation and symbols?

All we need is a code where characters are swapped out for numbers. As computers developed, there were lots of different (incompatible!) ways of doing this.

Thankfully, we now have a universal standard: Unicode. It's maintained by a governing body (the Unicode Consortium) whose mission is to encode all of the world's languages and symbols, old and new, so all computers can understand and process them the same way.

Side-note: this is how we ended up with emojis. An engineer in Japan in the 90s encoded certain numbers as little pictures for some Japanese phones, and the idea eventually made its way into Unicode. Years later Apple added emojis to the iPhone, they became popular, and now, year after year, the Unicode Consortium begrudgingly adds new emojis to the standard to make them more universal and representative! 😃

Unicode assigns a unique identifier (a number prefixed with U+) to every character in use, such as a lowercase 'a', for example. Computers encode that number in binary (usually using an encoding called UTF-8), and the number is decoded on the way out so an 'a' can be shown on the screen.
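
You can watch this round trip happen in a few lines of Python, using its built-in encoding tools:

```python
char = "a"
print(hex(ord(char)))           # 0x61 - the Unicode identifier U+0061
encoded = char.encode("utf-8")  # the number as bytes, ready for storage
print(encoded)                  # b'a' - one byte: 01100001 in binary
print(encoded.decode("utf-8"))  # a - decoded on the way out
```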

Pictures

Digital pictures are divided into tiny picture elements. Merging 'pix' (pictures) and 'el' (element) gave rise to the term pixel.

We will leave aside how screens actually display pixels, which can vary. When we talk about a pixel in a digital image, we're talking about an individual element of colour.

Various shades of a primary colour such as red can be encoded as a number between 0 and 255. Since all colours can be made from a combination of the primary colours of light (red, green and blue), a pixel stores one of the 256 possible values for each: combine them and you get over 16 million possible colours (256 x 256 x 256). Adding another value between 0-255 for transparency gives you over 4 billion possibilities.

So computers just split images into a lot of pixels. The numerical values for each colour in each row can be stored in a list. The dimensions of the image are also stored so the computer knows where each row ends, and therefore can recreate the image.
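
Here's a rough sketch of the idea in Python (real image formats pack the values far more compactly than a list of tuples, but the principle is the same):

```python
# A tiny 2x2 "image": each pixel is (red, green, blue), each value 0-255.
width, height = 2, 2
pixels = [
    (255, 0, 0), (0, 255, 0),  # row 1: red, green
    (0, 0, 255), (0, 0, 0),    # row 2: blue, black
]

# The stored dimensions tell the computer where each row ends,
# so the flat list of values can be rebuilt into an image.
rows = [pixels[i * width:(i + 1) * width] for i in range(height)]
for row in rows:
    print(row)
```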

The more pixels are stored, the more realistic the image will look at its intended size, though you have to store more numbers so it takes up more memory space. Hence, the graphics on early computers with less memory space looked very 'pixelated', i.e. they had fewer pixels so the pixels were more noticeable to the naked eye.

Bear in mind that with pictures and other types of files, computers can use various shortcuts and tricks to optimise the memory space they have available and make file sizes smaller. This is called compression. For example, an image of a plain black square doesn't need the colour value of black repeated over and over in memory: the fact that it has 'x' many black squares could be encoded instead.
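
That particular trick is known as run-length encoding. Here's a minimal sketch of it in Python:

```python
# Run-length encoding: store (value, count) pairs instead of repeats.
def run_length_encode(values):
    encoded = []
    for value in values:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1         # extend the current run
        else:
            encoded.append([value, 1])  # start a new run
    return encoded

row = ["black"] * 8            # a row of 8 black pixels
print(run_length_encode(row))  # [['black', 8]] - 8 values stored as 1 pair
```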

Audio

A sound wave can be recorded digitally by measuring its amplitude at a fixed interval (the sample rate): a common method takes 44,100 samples per second. That gives you enough data points to recreate the sound wave. The amplitude values are, of course, stored in binary.
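
Here's a rough sketch in Python of sampling a 440 Hz sine wave (the musical note A) - an illustration of the principle rather than real audio code:

```python
import math

SAMPLE_RATE = 44_100  # samples per second
FREQUENCY = 440       # Hz - the musical note A

# Measure the wave's amplitude at fixed intervals (here, the first ~1 ms).
samples = [
    math.sin(2 * math.pi * FREQUENCY * (n / SAMPLE_RATE))
    for n in range(44)
]
print(samples[:3])    # the first few amplitude values
```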

Raw, uncompressed audio data is often stored in a specific format known as a .wav file. The audio can also be compressed to various degrees, for example to create an .mp3 file. mp3 files use lossy compression, meaning some parts of the audio data are discarded or approximated to make the file smaller. Often the aim is to remove parts of the sound wave considered beyond the range of human hearing.

Video

Video is basically a sequence of images and can be stored as such. However, left uncompressed, the file size would be impractically large for the average video. Almost all video uses lossy compression for this reason. For example, an .mp4 file may be up to 200 times smaller than the original file.

The compression usually works by identifying parts of the image that haven't changed and only transmitting information about the parts that have changed. In many cases it will simply describe transformations that should be applied to these changed parts, rather than describing the whole image over and over. However, if this goes wrong and the transformations are applied over the top of the wrong part of the video, it can look very glitchy!
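
Here's a heavily simplified sketch of that idea in Python - real codecs are far cleverer, but the principle of 'only store what changed' looks like this:

```python
# Two consecutive "frames" as flat lists of pixel values.
frame1 = [0, 0, 5, 0, 0, 0]
frame2 = [0, 0, 5, 9, 0, 0]

# Store only the positions and values that changed since the last frame.
delta = [(i, new) for i, (old, new) in enumerate(zip(frame1, frame2)) if old != new]
print(delta)  # [(3, 9)] - one change recorded instead of a whole frame

# To rebuild frame2, apply the delta on top of frame1.
rebuilt = list(frame1)
for i, value in delta:
    rebuilt[i] = value
assert rebuilt == frame2
```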

Bits, bytes, and so on 🗄️

Even at the low levels of computing, humans don't think about data in binary form. This is where units of information come in.

In computing, a single binary digit (a 1 or 0) is called a bit: the smallest unit of data there is.

It's not very useful on its own, so you will hear much more often about bytes. Bytes are 8 bits. With that you can represent any number from 0 to 255, which is enough to store a standard English character or a colour value. That's one of the main reasons the number 8 was settled on for a byte.

It's also convenient because it is a power of two, specifically 2³ (2 x 2 x 2 = 8). Powers of two are used because binary is based on two digits. The next unit that engineers use is 2¹⁰ bytes, which is 1,024. For convenience, it came to be known as a kilobyte, since a kilo is 1,000 of something in the metric system and 1,024 was pretty close to that.

However, people started using kilobyte to mean 1,000 bytes. This generated a debate that goes on to this day. And with the growth of computing capabilities, larger and larger units were needed. In the traditional definitions, we had 1,024² bytes: a megabyte, 1,024³ bytes: a gigabyte, and now we even talk about terabytes of data on storage devices (1,024⁴ bytes).

But people, including technical people, still often use these unit names to mean multiples of 1,000 instead, which makes a big difference in how many bytes we are talking about for the larger units. Alternative terms have been suggested for the binary versions (e.g. 'kibibyte' for the 1,024-byte kilobyte), but they haven't exactly caught on.
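
A quick calculation shows how far apart the two definitions drift as the units grow:

```python
print(2 ** 10)                # 1024 - the traditional 'binary' kilobyte
print(10 ** 3)                # 1000 - the metric reading of 'kilo'

# The two definitions of a terabyte disagree by nearly 100 billion bytes.
print(1024 ** 4 - 1000 ** 4)  # 99511627776
```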

If you think this is confusing, that's because it is. Sorry! Let's move on.

Instructions 📜

We now know how data is encoded as bytes of information, but none of that explains how computers do anything with it. We need instructions.

To do this, we need a slight shift in thinking. Let's think of the value above a certain voltage (1) as true instead, and the other value (0) as false. With that simple shift, we can write logic using a branch of mathematics called Boolean algebra, named after mathematician George Boole, which is all about performing operations on true and false values.

We can do this because the flow of electric charge (current) in a computer's circuitry can be manipulated depending on whether a true or false value is received by components called transistors. Transistors are made from silicon (hence 'Silicon Valley'), a semiconductor: a material that can be made to either conduct or not conduct electricity. We can think of a transistor like a switch that is on or off. We pass a value from an input wire into the transistor, and get a value from an output wire.

Clever placement of transistors in a circuit can be used to create components called logic gates. The basic building blocks of computer logic are the following three logic gates:

  • NOT: output is true if input is false.

This is the simplest boolean operation: NOT true means false, and vice versa.

It uses just one transistor in a way that acts like a reversed switch. When the transistor is off, current flows along its normal path to the output wire. But if it's on, current is directed away from the output wire.

  • AND: output is true if both inputs are true.

Here one bit AND another bit must both be true. Two inputs are fed into two transistors that interrupt the flow of current towards the output. Both transistors must be on for current to flow through.

  • OR: output is true if either input is true.

Two transistors are used in parallel: just one needs to be on for current to flow through to the output.

We can use logic gates to make statements like: "if this bit of data is true, AND this other bit of data is NOT true, then send a true value to some other component". The value can be carried to the other component because the output wire would be fed into it.

Combinations like this are so common that some arrangements get their own logic gate. One example is XOR ('exclusive or') - true if either input is true, but not both - which we come to in Part 2.
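
We can mimic these gates in Python to see how they compose (a software sketch, of course - in the hardware it's all transistors and wires):

```python
def NOT(a): return not a
def AND(a, b): return a and b
def OR(a, b): return a or b

# The statement from above: this bit is true AND that bit is NOT true.
print(AND(True, NOT(False)))  # True

# XOR, built from the basic three: true if either input is true, but not both.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

print(XOR(True, False))  # True
print(XOR(True, True))   # False
```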

Of course the computer doesn't 'understand' the logic being expressed or the information passing through, but a human can design the circuitry in a way that is useful to us.

Now imagine you have billions of microscopic transistors in the circuitry. It sounds crazy, but that is exactly what is in your phone, computer or device right now. That's one reason it can carry out so many instructions so fast.

Memory 🧠

Computer logic isn't all that useful if you can't store values and come back to them later. This is why your computer has memory.

Memory is built from lots and lots of memory cells that can hold onto a 0 or a 1. There are many ways this can be done, all using pretty complicated science, which I'll only briefly attempt to explain.

Each method also has its own trade-offs. Here are some key characteristics memory can have:

  • Performance/speed: Memory which allows you to access values quickly is more expensive, and vice versa.
  • Volatility: If the memory needs a power source to hold values, it is volatile. If not, it's called non-volatile or persistent memory. Confusingly, volatile memory may also just be called 'memory', and non-volatile memory 'storage'.
  • Access method: The main distinction is between random access (access any value at any time) and serial access (go through the sequence to get to a certain bit of data). For example, you can't jump straight to a certain section on a tape; you have to fast-forward through the earlier parts first.

Computers use a range of different types of memory internally to balance these factors, and we also have external forms of storage for convenience.

Primary memory

We can call memory that the CPU accesses directly primary memory (there are other definitions of 'primary' or 'main' memory but we'll stick to this distinction here).

First we have the registers built into the computer's processor chip (we cover the processor in Part 2), holding values such as the code corresponding to the next instruction to be processed. Then there are some caches, either within the processor or a bit further away, storing values that the processor is likely to need.

These types of memory are fast, random access and volatile. They use a circuit known as a latch (it can latch onto one of two states). A latch uses logic gates and feeds outputs back into inputs in such a way that we can set and reset the value and the circuit will hold onto it. But even though the circuit can 'hold' a 1 or 0, i.e. a certain level of voltage, it can't do so if no current is supplied, hence it is volatile memory.
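
Here's a simplified Python simulation of one kind of latch - an SR latch built from two NOR ('not or') gates, each gate's output fed back into the other. It's a sketch of the principle, not a model of real circuitry:

```python
def NOR(a, b):
    return not (a or b)

# SR latch: two cross-coupled NOR gates. q is the stored bit.
def latch(set_, reset, q, q_bar):
    # Feed the outputs back into the inputs until the circuit settles.
    for _ in range(3):
        q_new = NOR(reset, q_bar)
        q_bar = NOR(set_, q)
        q = q_new
    return q, q_bar

q, q_bar = latch(set_=True, reset=False, q=False, q_bar=True)
print(q)  # True - the bit has been set

q, q_bar = latch(set_=False, reset=False, q=q, q_bar=q_bar)
print(q)  # True - the set signal is gone, but the value is still held
```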

Further away from the processor still, we have RAM. This stands for Random Access Memory, though that doesn't really distinguish it from the types of memory just mentioned. One type of RAM (static RAM) does use latches, but the most common type (dynamic RAM) instead uses a device called a capacitor, which holds electrical charge that needs to be regularly refreshed. It's not as quick to retrieve values, but still fast enough to run software.

RAM uses a grid with intersecting wires. 'Write enable' wires can be activated for a particular row and column in the grid: a memory address. Then a 1 or 0 can be sent along a 'data' wire to be stored at those coordinates. Storing 1 bit at a time isn't very useful, but lots of grids can be used together, and the same address (i.e. coordinates) can be passed to each grid at once to store a value of say, 32 or 64 bits. The computer uses a different memory location to store all the addresses for when it needs to access the data again.
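
Here's a very rough sketch of the addressing idea in Python (real RAM does it with wires and voltages, not dictionaries):

```python
# A tiny memory "grid": bits stored at (row, column) coordinates.
grid = {}

def write(row, col, bit):
    grid[(row, col)] = bit  # activate the 'write enable' lines, send the bit

def read(row, col):
    return grid[(row, col)]

write(2, 5, 1)     # store a 1 at address (row 2, column 5)
print(read(2, 5))  # 1
```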

There is also some form of non-volatile memory which contains important code the computer needs to start up (BIOS, covered in parts 2 and 3). Originally, this was stored with ROM (read-only memory, in its original form), which couldn't be changed at all. Many variants of ROM were produced over time which allowed data to be deleted and written over in some way. The culmination of these efforts is flash memory, invented at Toshiba, in which data is deleted and replaced in blocks, but fewer transistors are used so it is relatively cheap. This is still technically in the family of 'read-only' memory because you can't write an individual bit with one action.

Secondary memory

Your computer needs some non-volatile memory to hold all your data and programs, which it loads into RAM every time the computer starts up for quicker access. To this day, many computers use hard disk drives, also just called hard drives. A hard drive contains a magnetic disk, a bit like a CD, in a case with an arm that reads data from the disk rather like a vinyl record player. The data is stored on the disk using magnetism: the 1s and 0s are represented by magnetic grains which are polarised one way or the other.

Note that 'discs' (with a 'c' not a 'k') like CDs and DVDs, are similar but use light, not magnetism, to store 1s and 0s, and are known as optical storage. CD-ROMs (read-only CDs) were commonly used to distribute software programs before we could download them over the internet.

More recently, solid-state drives were invented: SSDs. These use flash memory and have no moving parts - they can access and delete data quicker than hard drives and run more smoothly, with less noise. On the other hand, they are more expensive.

You can also get external hard disk or solid state drives for storing a lot of data externally. If you have a smaller amount of data to store, you would probably use a flash drive (a.k.a. 'USB stick' - USB describes the type of connection to the computer it uses). They use flash memory too and are an inexpensive and common form of external storage. The precursor to the flash drive was the floppy disk - a sort of mini hard disk read by a floppy disk drive in the computer. Though now out of use, they are immortalised in the form of the humble 'save' icon.

Conclusion

Hopefully you now have a much better idea of how computers work. Part 2 will look a little closer at computer hardware and how instructions are carried out by the CPU.

First, a brief recap:

  • Our data is represented by binary states which we produce using physical phenomena such as voltage. We encode it then decode it again as output that has meaning to us.
  • Computers use billions of transistors to process these binary values in circuitry we design to implement logic using boolean operations.
  • We can record the values in memory, but there are many types with different characteristics - computers use a variety of types to achieve value for money.
