Matheus Costa
Concurrency and Parallelism: An Overview

Concurrency and parallelism are two important concepts in computer science that are often used interchangeably, but they have different meanings. Concurrency is the ability of a system to make progress on multiple tasks over the same period of time, even if it only ever works on one of them at any given instant. Parallelism is the ability to execute multiple tasks literally at the same time. Parallelism therefore requires multiple processors or cores, while concurrency can be achieved even on a single-core system through multiprogramming.

Single-core systems and multiprogramming

A single-core system is a computer system that has only one processing core. While such a system can only run one task at a time, it can appear to be running multiple tasks simultaneously through the use of multiprogramming. In multiprogramming, the operating system splits up the time available on the single core among multiple tasks, allowing each task to run for a short period of time. This makes it possible for a single-core system to handle multiple tasks at the same time, giving the illusion of concurrency.

A real-world analogy is a single parking space: many cars can use it over the course of a day, but only one at a time, and one car must leave before another can park. In JavaScript, this style of concurrency is achieved with asynchronous programming and non-blocking I/O.
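As a minimal sketch of this idea, here is single-threaded concurrency in Python's `asyncio`, which plays the same role as JavaScript's event loop (the task names and delays are made up for illustration):

```python
import asyncio

async def task(name, delay, log):
    # While this task "waits", it yields control so other tasks can run
    # on the same single thread: concurrency without parallelism.
    await asyncio.sleep(delay)
    log.append(name)

async def main():
    log = []
    # Both tasks are in flight at once; the event loop interleaves them.
    await asyncio.gather(task("slow", 0.2, log), task("fast", 0.1, log))
    return log

log = asyncio.run(main())
print(log)  # ['fast', 'slow']
```

Note that the "fast" task finishes first even though it was listed second: the single thread switched to it while "slow" was waiting.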

Context Switching

Context switching is the process of saving and restoring the state of a task or process when it is preempted or switched out by the operating system in favor of another task or process. In a single-core system, context switching plays an important role in enabling concurrency by allowing the operating system to switch between tasks or processes quickly and efficiently.

Older smartphones and basic mobile phones are often single-core systems, and they use context switching to manage multiple tasks. For example, when you switch from a game to a messaging app, the operating system saves the state of the game and switches to the messaging app.

How to achieve Parallelism

Parallelism can be achieved in a number of ways, including:

  • Multithreading: Multithreading involves splitting a task into smaller units of work that run on separate threads. On a multi-core processor, those threads can execute simultaneously; support is typically provided by the language runtime or a threading library.

  • Multiprocessing: Multiprocessing involves running multiple processes, each with its own memory space, which the operating system schedules across the available processors or cores so that they execute simultaneously.

  • Cluster Computing: Cluster computing involves connecting multiple computers together to form a single, more powerful system. Each computer in a cluster can be used to execute different parts of a task in parallel, improving performance and scalability.

  • GPU Computing: Graphics Processing Units (GPUs) are specialized processors originally designed for graphical operations. A GPU contains many small cores that can apply the same operation to large amounts of data in parallel, which makes it well suited to workloads such as graphics, simulation, and machine learning.

Communication between threads

Threads can communicate with each other using a variety of methods, including shared memory, message passing, and semaphores. With shared memory, threads read and write the same memory locations to exchange information, which is fast but requires careful synchronization. Message passing involves sending messages between threads, for example through a queue, allowing them to exchange information without touching the same memory directly. Semaphores are a synchronization primitive used to manage access to shared resources, letting threads signal each other when a resource becomes available or has been acquired.
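The message-passing style can be sketched with Python's thread-safe `queue.Queue` (the producer/consumer names and the sentinel value are illustrative choices, not a fixed API):

```python
import threading
import queue

def producer(q):
    # Message passing: send data through a thread-safe queue
    # instead of sharing raw memory.
    for n in range(3):
        q.put(n)
    q.put(None)  # sentinel: signals "no more messages"

def consumer(q, received):
    while True:
        msg = q.get()
        if msg is None:
            break
        received.append(msg)

q = queue.Queue()
received = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, received))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [0, 1, 2]
```

The queue handles all locking internally, so neither thread ever touches the other's data directly.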

Mutexes

A mutex (short for "mutual exclusion") is a synchronization primitive used to protect shared resources from simultaneous access by multiple threads. A mutex allows only one thread to access the shared resource at a time, ensuring that the resource remains in a consistent state. Mutexes are often used to prevent race conditions, which can occur when two or more threads attempt to access the same shared resource simultaneously.

When multiple processes or threads want to print to a shared printer, a mutex can be used to ensure that only one of them accesses the printer at a time. Each job then prints in full before the next one begins, preventing overlapping or garbled output. Note that a mutex guarantees exclusive access, not ordering: jobs are not necessarily served in the order they were submitted unless the lock is fair or a queue is used.
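A classic demonstration of a mutex preventing a race condition, sketched with Python's `threading.Lock`, is protecting a shared counter; the read-modify-write of `counter += 1` is not atomic, so without the lock some increments could be lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # The lock (mutex) makes the read-modify-write atomic,
        # so no two threads update the counter at the same time.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```

With the lock in place the final count is always exactly 400000; removing the `with lock:` line can silently drop updates.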

Deadlocks

A deadlock is a situation in which two or more threads are blocked, each waiting for the other to release a resource. Deadlocks can occur when two or more threads attempt to access the same shared resources and each thread holds a lock on one resource while waiting for another to be released. The result is that no thread can continue execution. To avoid deadlocks, it is important to use synchronization primitives correctly and to carefully manage the order in which threads acquire shared resources; a common strategy is to have every thread acquire locks in the same fixed global order.
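The lock-ordering strategy can be sketched in Python as follows (the two locks and worker names are illustrative): both threads take the locks in the same order, `lock_a` then `lock_b`. If one thread instead took them in the opposite order, each could end up holding one lock while waiting forever for the other.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Every thread acquires the locks in the SAME fixed order (a, then b).
    # Acquiring them in opposite orders in different threads is the
    # classic recipe for deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']
```

Both threads finish because the consistent ordering makes it impossible for them to wait on each other in a cycle.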

Security concerns in threaded systems

Threads can pose a security risk in some situations, particularly when they share memory. For example, consider a browser that uses threads to handle tabs. Each tab runs in a separate thread but shares the same address space, meaning that each tab can, in principle, read the memory used by other tabs. This is a security risk, as one tab could access sensitive information in another, such as a password or other confidential data; it is one reason modern browsers isolate tabs in separate processes rather than threads. To address this risk, it is important to implement proper security measures, such as memory protection, access control, and encryption, to protect sensitive information from being accessed by unauthorized threads.

In conclusion, concurrency and parallelism are important concepts in computer science, and they are widely used in many different applications. Understanding mutexes, deadlocks, and communication between threads is crucial for designing and implementing secure and reliable threaded systems. When working with threads, it is important to be aware of the security risks involved, and to implement proper security measures to protect sensitive information from being accessed by unauthorized threads.
