“The simplest computer is just a light switch.” -- My first boss at a PC repair shop.
It’s amazing how oversimplified this statement sounds, and it totally is, but it’s also completely true. At its core, a CPU deals only in ones and zeros: on or off. But we need our computers to crunch billions of calculations per second. How is this achieved? A CPU is essentially a huge network of transistors switching on and off. The rate at which it can flip those switches is known as its “clock speed,” measured in hertz.
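To get a loose feel for this, here is a small sketch that counts how many simple additions Python can perform in a fixed window. The interpreter adds huge overhead, so the true number of hardware instructions per second is far higher than what this measures; it’s an illustration, not a benchmark.

```python
import time

def additions_per_second(duration=1.0):
    """Count simple additions completed before a deadline."""
    count = 0
    total = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        total += 1  # one addition (plus plenty of loop overhead)
        count += 1
    return count

print(f"~{additions_per_second():,} additions/second (through the Python interpreter)")
```

Even through an interpreter, the count lands in the millions, which hints at the billions of raw switch-flips happening underneath.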
Early computers were built with vacuum tubes, but these were large, expensive, and prone to failure. Vacuum tubes also used quite a large amount of energy, and they were readily replaced by transistors when those became widely available in the mid-1950s.
Just speaking from a hardware standpoint, you only need a CPU, power supply, RAM, and a motherboard to run your computer. RAM (Random Access Memory) is volatile, meaning its contents are wiped from existence each time electricity stops running through it. There are several different types of RAM; the one you probably think of when that word is said are the sticks attached to the main motherboard. That is only one type, however, and its main utility is that it is large, fast, and physically placed close to the CPU. There are even faster, closer caches of memory placed on the CPU die itself. These caches store the information that the CPU is most likely to need next, like the variables you just declared and assigned. If you want any of this data to survive after the power is turned off, this is where a hard drive comes into play.
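The volatile-versus-persistent distinction maps directly onto everyday code. As a rough sketch: a plain variable lives in RAM and disappears when the process (or the power) stops, while a file written to a hard drive survives a reboot. The filename here is made up for the example.

```python
import os
import tempfile

# Held in volatile RAM: gone the moment this process ends.
in_ram = "gone when the power goes out"

# Written through to persistent storage: survives after the program exits.
path = os.path.join(tempfile.gettempdir(), "persists.txt")
with open(path, "w") as f:
    f.write("still here after a reboot")

with open(path) as f:
    print(f.read())
```

Real persistence is more layered than this (OS write caches, drive buffers), but the basic contract is the same: RAM forgets, disks remember.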
To sum up: when your computer runs some code, it arrives at the CPU as a stream of instructions, and the CPU uses its RAM to store bits for later use.
CPUs can vary in size because their requirements can differ. In the picture above, the CPU on the right was made to support larger amounts of memory and packs more physical cores onto the die itself.
Before we get into single-threading and multi-threading, let's talk about what a thread is in regard to CPU functionality.
“In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.” -- Wikipedia
Now that you know what a single thread is, multi-threading follows naturally. Modern CPUs contain multiple cores placed adjacently on the same die, and each core can run its own thread, so the CPU can crunch calculations from multiple sources at the same time.
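A minimal sketch of this in Python: each `threading.Thread` below is an independent sequence of instructions that the OS scheduler can manage on its own, exactly as the Wikipedia definition describes. (The function name `crunch` and the numbers are just illustrative.)

```python
import threading

results = {}

def crunch(name, numbers):
    """One independent stream of work: sum the squares of some numbers."""
    results[name] = sum(n * n for n in numbers)

# Two threads the scheduler can run on separate cores.
t1 = threading.Thread(target=crunch, args=("a", range(1000)))
t2 = threading.Thread(target=crunch, args=("b", range(2000)))
t1.start(); t2.start()
t1.join(); t2.join()   # wait for both to finish
print(results)
```

Worth noting: in CPython, the global interpreter lock means threads like these take turns executing Python bytecode, so true same-instant execution of Python code usually requires multiple processes instead.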
From here I’d like to talk about two closely related terms, concurrency and parallelism, because once we get into multi-threading, we’re talking about completing multiple tasks at the same time.
You can think of concurrency as one cook in a restaurant making dish after dish after dish all day. Maybe they have to switch to cutting celery while the chicken is cooking, but they’ll return to take the chicken out of the oven before it burns.
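The one-cook picture can be sketched with Python's `asyncio`: a single thread juggling two tasks, switching to the celery while the chicken "cooks" during an `await`. The task names are made up for the analogy.

```python
import asyncio

log = []

async def roast_chicken():
    log.append("chicken in oven")
    await asyncio.sleep(0.02)   # cooking: the cook is free to do other work
    log.append("chicken out of oven")

async def chop_celery():
    log.append("chopping celery")

async def kitchen():
    # One thread, two tasks, interleaved by the event loop.
    await asyncio.gather(roast_chicken(), chop_celery())

asyncio.run(kitchen())
print(log)  # the celery gets chopped while the chicken is still in the oven
```

Only one thing ever happens at a given instant here; the speedup comes purely from not standing idle during the wait.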
A popular hardware assist for this kind of task-switching comes from Intel. Their Hyper-Threading technology presents a single physical CPU core as two logical cores, letting the core work on another thread's instructions while the first thread is stuck waiting.
Parallelism, on the other hand, is like having multiple chefs in the kitchen cooking completely different meals, entirely independent from one another.
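In Python, the multiple-chefs picture corresponds to multiple processes, each of which can occupy its own core at the same instant. A minimal sketch using the standard library's `multiprocessing.Pool` (the `prepare` function and meal names are invented for the analogy):

```python
from multiprocessing import Pool

def prepare(meal):
    """One chef's job: turn an order into a finished dish."""
    return f"{meal} ready"

if __name__ == "__main__":
    # Three worker processes, each free to run on its own core simultaneously.
    with Pool(processes=3) as kitchen:
        print(kitchen.map(prepare, ["pasta", "soup", "steak"]))
```

The `if __name__ == "__main__"` guard matters here: on platforms that spawn new interpreters for worker processes, omitting it can cause the workers to re-execute the pool setup recursively.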