Process and Thread

A summary of the YouTube video "Process Management (Processes and Threads)".

Process

A process is an instance of a program that is currently being executed. It is an independent entity consisting of an address space, a set of data structures, and one or more threads of execution. During its lifetime a process moves through states such as running, ready, waiting, and terminated. Process models vary, but common ones include the parent-child model and the peer model.

On Unix systems, processes are represented by process IDs (PIDs) and can be managed using various system calls and commands, such as fork(), exec(), kill, and ps.
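
As an illustration of these system calls, here is a minimal sketch in C (assuming a POSIX system; the program run in the child, ls, is an arbitrary choice): the parent creates a child with fork(), the child replaces its image with exec, and the parent waits for it to finish.

```c
/* Minimal sketch (POSIX assumed): fork a child, exec a program in it,
 * and wait for it from the parent. The program run ("ls") is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* create a new process */

    if (pid < 0) {                       /* fork failed */
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                      /* child process */
        printf("child PID: %d\n", (int)getpid());
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
        perror("execlp");                /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }
    printf("parent PID: %d created child %d\n", (int)getpid(), (int)pid);
    wait(NULL);                          /* reap the child */
    return 0;
}
```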

Main Requirements of Process Control

  • Concurrent execution: the ability of the operating system to run multiple processes simultaneously.

  • Sharing resources: processes can share resources such as memory, CPU, and I/O devices.

  • Independent address space: each process has its own address space, which prevents one process from corrupting the memory of another process.

Process State

The process state reflects the state of the process in its life cycle. There are several commonly known process states, including:

  • New: the process is in the process of being created.
  • Ready: the process is ready to be executed by the CPU.
  • Running: the process is currently being executed by the CPU.
  • Blocked: the process cannot run because it is waiting for a resource or external input.
  • Terminated: the process has finished executing.

Process Creation and Termination

Creation

Processes are created when a program starts or when a running program requests the creation of a new process. A new process receives its own identity in the operating system, including a process ID (PID), an address space, and a process control block (PCB).

Termination

Processes can end for a number of reasons, such as completing execution, being stopped by the operating system, or experiencing failure during execution. When a process ends, the operating system will clean up the resources used by the process, including memory and open files. Processes can also be forcibly stopped by the operating system in certain situations, such as if the process accesses unauthorized memory or if the process experiences a deadlock.
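
As a small example (a sketch assuming a POSIX system), a parent process can find out how its child terminated by collecting the exit status with waitpid(); until the parent does so, the terminated child remains a zombie.

```c
/* Sketch (POSIX assumed): the child terminates with an exit code and the
 * parent collects it with waitpid(), which also removes the zombie entry. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        _exit(42);                        /* child terminates with status 42 */
    }

    int status;
    waitpid(pid, &status, 0);             /* clean up the terminated child */
    if (WIFEXITED(status)) {
        printf("child %d exited normally with status %d\n",
               (int)pid, WEXITSTATUS(status));
    } else if (WIFSIGNALED(status)) {
        printf("child %d was killed by signal %d\n",
               (int)pid, WTERMSIG(status));
    }
    return 0;
}
```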

Events that can trigger the creation and termination of a process:

| Events that Trigger Process Creation | Events that Trigger Process Termination |
| --- | --- |
| User request | Normal completion |
| System initialization | Error |
| Interrupts | Killed by another process |
| Fork system call | User intervention |
| Exec system call | System shutdown |

Process Models

Two-state model

A simple model that describes a system or process that can exist in one of two states. These two states are typically referred to as "on" and "off", "up" and "down", or "working" and "not working", depending on the context of the system or process being described.

The two-state model is commonly used in various fields such as electronics, mechanics, and computer science. For example, a light switch can be considered a two-state model, as it can be either "on" or "off". Similarly, a computer network connection can be considered a two-state model, as it can be either "connected" or "disconnected".

The simplicity of the two-state model makes it useful for modeling and analyzing systems that can be described in binary terms. However, it may not be suitable for more complex systems that require more nuanced descriptions of state.

(Diagram: the two-state process model)

Five-state model

The five-state model is a common model used to describe the various states that a process can be in during its lifecycle (a small sketch in C follows the list). The five states are as follows:

  • New: This is the initial state of a process, where the process is being created.
  • Ready: In this state, the process is waiting to be assigned to a processor. It is loaded into main memory and is ready to run, but the CPU is currently executing another process.
  • Running: In this state, the process has been assigned to a processor and is currently executing its instructions.
  • Blocked (Waiting): In this state, the process is waiting for an event to occur or for a resource to become available. For example, it may be waiting for user input, waiting for I/O operations to complete, or waiting for a signal from another process.
  • Terminated (Exit): In this state, the process has finished its execution and is being removed from the system. The process releases any resources it was using, and its PCB is deleted from the system.
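
As a toy illustration of the model (the names and the transition check are my own sketch, not taken from the video), the five states can be written as an enum together with a check of which transitions the model allows:

```c
/* Toy sketch of the five-state model: an enum for the states and a check
 * for the transitions the model permits. Names are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state;

static bool transition_allowed(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                   /* admit            */
    case READY:   return to == RUNNING;                 /* dispatch         */
    case RUNNING: return to == READY   ||               /* preempt          */
                         to == BLOCKED ||               /* wait for an event */
                         to == TERMINATED;              /* exit             */
    case BLOCKED: return to == READY;                   /* event occurred   */
    default:      return false;                         /* no exit from TERMINATED */
    }
}

int main(void) {
    printf("RUNNING -> BLOCKED allowed? %s\n",
           transition_allowed(RUNNING, BLOCKED) ? "yes" : "no");
    printf("BLOCKED -> RUNNING allowed? %s\n",
           transition_allowed(BLOCKED, RUNNING) ? "yes" : "no");
    return 0;
}
```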

(Diagram: the five-state process model)

Two-Level Model

The Two-Level Model is a process state model that divides process states into two levels: User Level and Kernel Level. The User Level states, which include Running, Ready, and Blocked, are managed by the process scheduler. Running is the state of a process that is currently executing on the CPU, while Ready is the state of a process that is waiting to be assigned to the CPU. Blocked is the state of a process that is waiting for an event to occur, such as the completion of an I/O operation.

The Kernel Level states, which include Interrupted, Waiting, and Zombie, are managed by the operating system kernel. Interrupted occurs when a process is interrupted by a hardware event, such as a timer interrupt. Waiting occurs when a process is waiting for a resource, such as I/O, to become available. Zombie occurs when a process has completed its execution, but its exit status has not yet been retrieved by its parent process.

The Two-Level Model provides a clear separation between user-level states and kernel-level states. This allows for efficient management of processes and resources, as the operating system can focus on managing the kernel-level states while leaving the user-level states to be managed by the process scheduler. The model also allows for easy implementation of process synchronization and communication mechanisms.

(Diagram: process states and transitions)

Description:

  • Created: the process has just been created by a system call but is not yet ready to run.
  • User running: the process is executing in user mode.
  • Kernel running: the process is executing in kernel mode, for example while handling a system call.
  • Zombie: the process has terminated and no longer executes, but its entry remains until the parent retrieves its exit status.
  • Preempted: the process was about to return from kernel mode to user mode, but the kernel preempted it and scheduled another process instead.
  • Ready to run, in memory: the process is in main memory and ready to run, waiting for the kernel to schedule it.
  • Ready to run, swapped: the process is ready to run but has been swapped out to secondary storage because there is not enough main memory.
  • Sleep, swapped: the process is blocked and has been swapped out to secondary storage.
  • Asleep in memory: the process is in main memory (not swapped out) but is in a blocked state.

Some Related Terms

  • Process: A program that is currently running on a computer.

  • Thread: A sub-part of a process that can be scheduled independently by the operating system.

  • Process Control Block (PCB): A data structure used by the operating system to store information about a process, including its state, saved register contents, and pointers to the memory used by the process (a simplified sketch appears after this list).

  • Trace: The sequence of instructions that a process executes, which characterizes the behavior of that process.

  • Dispatcher: A component of the operating system responsible for allocating CPU time to the process selected by the scheduler.
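
As mentioned in the PCB entry above, here is a rough, illustrative sketch of what a simplified PCB might contain; the field names are hypothetical, and real kernels (for example Linux's task_struct) store far more information.

```c
/* Simplified, illustrative PCB layout; the field names are hypothetical and
 * real operating systems store many more details (e.g. Linux's task_struct). */
#include <stdio.h>
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state;

struct pcb {
    int         pid;             /* process identifier                     */
    int         parent_pid;      /* identifier of the parent process       */
    proc_state  state;           /* current state in the process lifecycle */
    uint64_t    program_counter; /* saved instruction pointer              */
    uint64_t    registers[16];   /* saved general-purpose registers        */
    void       *page_table;      /* pointer to the address-space mapping   */
    int         open_files[16];  /* open file descriptors                  */
    int         priority;        /* scheduling priority                    */
};

int main(void) {
    printf("sizeof(struct pcb) = %zu bytes\n", sizeof(struct pcb));
    return 0;
}
```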

Threads

A thread is a separate path of execution within a process. It shares the same memory space and resources as the process that it belongs to. Threads can improve performance and responsiveness of a program, especially in systems with multiple CPUs or cores.

Advantages of Using Threads:

A thread is the basic unit of parallel processing that allows multiple tasks to be executed simultaneously within a program. Some advantages of using threads in programming include:

  • Improving Program Responsiveness: By using threads, a program can respond to inputs and events faster, thus improving program speed and performance.

  • Simplifying Programming: In some cases, using threads can simplify the programming process by allowing for separate and parallel management of different tasks.

  • Improving Resource Efficiency: Threads can help improve the use of computer resources, such as CPU and memory, by allowing multiple tasks to be executed simultaneously.

Single Threading Approach

The single threading approach uses only one thread to execute a program. In this approach, tasks are processed serially and one by one. The weakness of this approach is that it is less effective in utilizing computer resources because it uses only one thread.

Multi-Threading Approach

The multi-threading approach allows for the use of multiple threads simultaneously within a program. With this approach, tasks can be processed in parallel, thus improving program efficiency and responsiveness.
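
A minimal sketch of the multi-threading approach using POSIX threads (compile with -pthread): several worker threads are created, each does its own share of the work, and the main thread joins them.

```c
/* Sketch of the multi-threading approach (POSIX threads assumed):
 * four workers run in parallel and the main thread waits for all of them. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d: doing its share of the work\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    int ids[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);   /* wait for each worker to finish */
    }
    return 0;
}
```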

(Diagram: a single-threaded process vs. a multi-threaded process)

Processes in the Multithreading Perspective

In the multithreading perspective, a process can have several threads working on different tasks simultaneously. These threads can communicate and share resources, such as memory and CPU, to complete the given tasks.
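
Because the threads of one process share its memory, access to shared data has to be coordinated. The sketch below (POSIX threads assumed, compile with -pthread) shows two threads incrementing a shared counter protected by a mutex.

```c
/* Sketch: two threads update a shared counter; a mutex prevents the
 * updates from interleaving and corrupting the value. (POSIX threads) */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared between threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);           /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```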

Multicore and Multithreading

Multicore is a technology that places multiple processing cores on a single CPU to improve computer performance. Combined with multithreading, multicore technology can improve program efficiency and performance by allowing threads to run on different cores.

Applications Benefiting from the Multicore Concept:

The multicore and multithreading concepts can be applied to various types of programs, such as programs that process images, videos, or audio, programs that perform intensive calculations, and programs that require real-time interaction. In these applications, this concept can significantly improve program performance and efficiency.

  1. Image and video processing software
  2. Audio editing and recording software
  3. 3D modeling and animation software
  4. Scientific simulation software
  5. Financial modeling and analysis software

Differences Between Processes and Threads

| Aspect | Processes | Threads |
| --- | --- | --- |
| Definition | A program in execution, consisting of an executable file and associated resources such as memory, system files, and I/O devices. | A lightweight process that can execute concurrently with other threads within the same process. |
| Resource ownership | Each process has its own set of system resources, including memory, file descriptors, and sockets. | All threads within a process share the same set of system resources, including memory and file descriptors. |
| Communication | Processes typically communicate with each other using interprocess communication (IPC) mechanisms, such as pipes, sockets, or message queues. | Threads can communicate with each other by directly accessing shared data within the same process. |
| Scheduling | Processes are scheduled and managed by the operating system kernel, which assigns each process a priority and time slice. | Threads are scheduled and managed by the operating system kernel or by the process itself, depending on the threading model used. |
| Creation | Processes are typically created using the fork() system call, which creates a copy of the parent process. | Threads are created within a process using a thread library or a system call, such as pthread_create() on Unix-based systems. |
| Overhead | Creating a new process incurs a significant amount of overhead, including memory allocation and copying of resources from the parent process. | Creating a new thread is much faster than creating a new process, as the new thread shares the same resources as the parent process. |
| Isolation | Processes are fully isolated from each other, meaning that a bug or crash in one process does not affect other processes. | Threads share the same address space, so a bug or crash in one thread can affect the entire process. |
| Scalability | Processes are less scalable than threads, as they require more system resources and have higher overhead. | Threads are more scalable than processes, as they can take advantage of multiple CPUs and have lower overhead. |

Some Tools for Process Management in Linux (Ubuntu)
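
On Ubuntu, processes are usually inspected and controlled from the shell with commands such as ps, top, and kill; the same kill operation can also be performed from C with the kill() system call. A small sketch (the target PID is assumed to be passed on the command line):

```c
/* Sketch: send SIGTERM to a process whose PID is given as an argument,
 * similar to running `kill <pid>` from an Ubuntu shell. */
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    if (kill(pid, SIGTERM) == -1) {       /* ask the process to terminate */
        perror("kill");
        return EXIT_FAILURE;
    }
    printf("sent SIGTERM to process %d\n", (int)pid);
    return 0;
}
```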
