Unit No 3 OS Notes
Concurrency
Concurrency means that multiple processes or threads are executing during overlapping periods of time. It does not necessarily mean they run simultaneously, but that they are making progress independently.
            +-------------+
            | Concurrency |
            +-------------+
              /         \
             /           \
+-----------+             +-----------+
| Process 1 |             | Process 2 |
+-----------+             +-----------+
      |                         |
      +------------+------------+
                   |
         +------------------+
         | Shared Resources |
         +------------------+
Benefits of concurrency include efficient CPU utilization (e.g., while one process waits for I/O, another can use the CPU).
Concept            Description
Thread             A lightweight process; multiple threads can exist within a single process, sharing memory.
Context Switch     The act of saving and loading thread/process states when switching between them.
Critical Section   A part of code that accesses shared resources and must not be executed by more than one thread at a time.
Problems in Concurrency
1. Race Condition – Two threads access shared data at the same time, leading to unpredictable
results.
2. Deadlock – Two or more processes wait forever for resources held by each other.
3. Starvation – A process waits indefinitely because other processes are continuously given preference for resources.
4. Livelock – Similar to deadlock, but the processes keep changing state and still fail to make progress.
Mechanism                  Description
Mutex (Mutual Exclusion)   Ensures only one thread can access the critical section at a time.
Critical Section Problem
The Critical Section is a part of a program where a process accesses shared resources (e.g., variables, files, memory). If multiple processes or threads enter the critical section simultaneously, it may lead to:
Race conditions
Data inconsistency
System crashes
+---------------------------+
|       Entry Section       |
|  (Requests entry to CS)   |
+---------------------------+
             |
             v
+---------------------------+
|     Critical Section      |
| (Access shared resource)  |
+---------------------------+
             |
             v
+---------------------------+
|       Exit Section        |
| (Releases access to CS)   |
+---------------------------+
             |
             v
+---------------------------+
|     Remainder Section     |
|  (Performs other tasks)   |
+---------------------------+
Goal
To design a solution that ensures Mutual Exclusion, Progress, and Bounded Waiting.
1. Mutual Exclusion: Only one process can enter the critical section at a time.
2. Progress: If no process is in the critical section, one of the waiting processes must be allowed to
enter.
3. Bounded Waiting: A process should not wait forever to enter the critical section.
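For illustration, a classic two-process software solution that meets all three requirements is Peterson's algorithm. The sketch below is a minimal C version for two threads; the names (flag, turn, enter_region, leave_region) are illustrative, and on modern hardware the accesses would additionally need atomic types or memory barriers.

#include <stdbool.h>

/* Peterson's algorithm: mutual exclusion for exactly two processes/threads
 * with ids 0 and 1. Busy-waits; illustrative sketch only. */
bool flag[2] = {false, false};   /* flag[i] = process i wants to enter      */
int  turn = 0;                   /* whose turn it is to yield               */

void enter_region(int i)         /* call before the critical section (i is 0 or 1) */
{
    int other = 1 - i;
    flag[i] = true;              /* announce intent to enter                */
    turn = other;                /* give priority to the other process      */
    while (flag[other] && turn == other)
        ;                        /* busy-wait until it is safe to enter     */
}

void leave_region(int i)         /* call after the critical section         */
{
    flag[i] = false;             /* allow the other process to proceed      */
}

Mutual exclusion holds because both flags can be true only while turn blocks one of the two; progress and bounded waiting follow because turn alternates priority.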
[Diagram: the concurrency problem and the synchronization mechanisms used to solve it – semaphores, mutexes, and monitors]
1. Semaphores
A Semaphore is a variable used to control access to a common resource.
Two Types:
1. Binary Semaphore – takes only the values 0 and 1; works like a lock.
2. Counting Semaphore – takes non-negative integer values; used to control access to a pool of resources.
Operations:
wait(S): if S > 0, decrement S and continue; otherwise block until S becomes positive.
signal(S): increment S and wake one waiting process, if any.

Example – two processes sharing one critical section:

semaphore S = 1;

process P1:
wait(S);
// critical section
signal(S);

process P2:
wait(S);
// critical section
signal(S);
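The same pattern, written as a runnable C program with POSIX semaphores (sem_init, sem_wait, sem_post); the worker function and counter are illustrative additions, not part of the original notes. Compile with gcc -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;                      /* binary semaphore guarding the counter      */
int shared_counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);         /* wait(S): enter the critical section        */
        shared_counter++;     /* critical section                           */
        sem_post(&S);         /* signal(S): leave the critical section      */
    }
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    sem_init(&S, 0, 1);       /* initial value 1 => acts as a lock          */
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("counter = %d\n", shared_counter);   /* expected: 200000         */
    sem_destroy(&S);
    return 0;
}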
2. Mutex (Mutual Exclusion Lock)
A Mutex is used to lock critical sections so that only one thread can access the resource at a time.
A Mutex (Mutual Exclusion) is a synchronization primitive that allows only one thread or process to access a shared resource or execute a critical section of code at a time. It works by granting a "lock" to a thread; any other thread attempting to acquire the same lock must wait until the first thread releases it. This prevents data races and ensures data integrity in concurrent programming environments.
Operations:
lock (acquire) – called before entering the critical section; blocks if another thread already holds the mutex.
unlock (release) – called after leaving the critical section so that a waiting thread can acquire the lock.
Example in C (pthreads):

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

pthread_mutex_lock(&lock);
// critical section
pthread_mutex_unlock(&lock);
Fig: Mutex (Mutual Exclusion Lock)
Mutex ensures:
Mutual exclusion – only one thread holds the lock at any time.
Ownership – only the thread that locked the mutex may unlock it.
3. Monitor
A Monitor is a high-level abstraction that combines:
Data (the shared resource)
Procedures (the only operations allowed to access that data)
Synchronization (at most one process or thread is active inside the monitor at a time; condition variables let threads wait inside it)

In operating systems, a monitor is a software-based synchronization construct, provided by the programming language or runtime, that controls access to shared resources among multiple processes or threads and ensures data integrity without the programmer managing locks explicitly.
Language Support: Semaphores – C, C++, Java, etc.; Mutexes – C, C++, Java, etc.; Monitors – Java, C#, Python, etc.
Use monitors for clean, high-level abstraction and safer concurrency control
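C has no built-in monitor construct, but the idea can be approximated with one mutex plus a condition variable, which is the usual pthreads substitute. The bounded counter below is an illustrative sketch, not an example from the original notes.

#include <pthread.h>

/* A tiny "monitor": shared data plus the only procedures allowed to touch it,
 * all guarded by one lock, with a condition variable for waiting inside. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_zero;
    int value;                                   /* the shared data          */
} counter_monitor;

void monitor_init(counter_monitor *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_zero, NULL);
    m->value = 0;
}

void monitor_increment(counter_monitor *m)       /* monitor procedure        */
{
    pthread_mutex_lock(&m->lock);
    m->value++;
    pthread_cond_signal(&m->not_zero);           /* wake one waiting thread  */
    pthread_mutex_unlock(&m->lock);
}

void monitor_decrement(counter_monitor *m)       /* monitor procedure        */
{
    pthread_mutex_lock(&m->lock);
    while (m->value == 0)                        /* wait inside the monitor  */
        pthread_cond_wait(&m->not_zero, &m->lock);
    m->value--;
    pthread_mutex_unlock(&m->lock);
}

In languages with native monitors (e.g., Java's synchronized methods), the lock and the wait/notify calls are generated automatically.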
Synchronization Problems
These problems model real-world situations where multiple threads or processes need to coordinate
access to shared resources without causing race conditions, deadlock, or starvation.
Producer-Consumer (Bounded Buffer) Problem
Need to prevent:
o Producer from adding when the buffer is full.
o Consumer from removing when the buffer is empty.
Producer:
while (true) {
    produce_item();
    wait(empty);
    wait(mutex);
    insert_item();
    signal(mutex);
    signal(full);
}

Consumer:
while (true) {
    wait(full);
    wait(mutex);
    remove_item();
    signal(mutex);
    signal(empty);
    consume_item();
}
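A runnable C version of the same scheme, assuming a ring buffer of size 5, POSIX semaphores, and one producer and one consumer thread; the names (buffer, in, out, etc.) are illustrative. Compile with gcc -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                          /* buffer capacity                      */

int buffer[N], in = 0, out = 0;
sem_t empty_slots;                   /* counts free slots, starts at N       */
sem_t full_slots;                    /* counts filled slots, starts at 0     */
sem_t mutex;                         /* protects the buffer indices          */

void *producer(void *arg)
{
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);      /* wait(empty)   */
        sem_wait(&mutex);            /* wait(mutex)   */
        buffer[in] = item;           /* insert_item() */
        in = (in + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&full_slots);       /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);       /* wait(full)    */
        sem_wait(&mutex);            /* wait(mutex)   */
        int item = buffer[out];      /* remove_item() */
        out = (out + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&empty_slots);      /* signal(empty) */
        printf("consumed %d\n", item);   /* consume_item() */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}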
Ensures:
Mutual exclusion on the buffer (via mutex).
The producer blocks when the buffer is full (empty reaches 0).
The consumer blocks when the buffer is empty (full is 0).
Reader-Writer Problem
Goal: Allow multiple readers but only one writer at a time.
Reader-Writer Problem Diagram: (shown below, after the solution code)
semaphore mutex = 1;   // protects readcount
semaphore wrt = 1;     // held by a writer or by the group of readers
int readcount = 0;

Reader:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);         // first reader locks out writers
signal(mutex);
read_data();
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);       // last reader lets writers in
signal(mutex);

Writer:
wait(wrt);
write_data();
signal(wrt);

Variants: readers-preference (as above; writers may starve) and writers-preference (an arriving writer blocks new readers).
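In practice, POSIX already packages this pattern as a reader-writer lock; a minimal sketch (variable names illustrative):

#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

void reader_task(void)
{
    pthread_rwlock_rdlock(&rw);      /* many readers may hold this at once   */
    int copy = shared_data;          /* read_data()                          */
    (void)copy;
    pthread_rwlock_unlock(&rw);
}

void writer_task(int value)
{
    pthread_rwlock_wrlock(&rw);      /* exclusive: no readers, no writers    */
    shared_data = value;             /* write_data()                         */
    pthread_rwlock_unlock(&rw);
}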
[Reader-Writer Problem Diagram: multiple Readers may access the shared resource (e.g., a database) concurrently, while the Writer modifies the shared data and requires exclusive access – no other readers or writers.]

Dining Philosophers Problem
Problem Statement: Five philosophers sit around a circular table with five forks, one between each pair; each philosopher alternately thinks and eats, and needs both the left and the right fork to eat.

Naive Semaphore Approach (Leads to Deadlock!):
semaphore forks[5] = {1, 1, 1, 1, 1};

Philosopher i:
wait(forks[i]);         // pick up left fork
wait(forks[(i+1)%5]);   // pick up right fork
eat();
signal(forks[i]);       // put down left fork
signal(forks[(i+1)%5]); // put down right fork
Solution to Avoid Deadlock:
Use numbered (resource) ordering, or allow a philosopher to pick up both forks at once under a mutex or monitor (a sketch of the ordering approach follows the code below).
semaphore mutex = 1;
semaphore forks[5] = {1, 1, 1, 1, 1};

Philosopher i:
wait(mutex);            // lock before picking up forks
wait(forks[i]);
wait(forks[(i+1)%5]);
signal(mutex);          // both forks acquired
eat();
signal(forks[i]);
signal(forks[(i+1)%5]);
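A sketch of the numbered-ordering variant in C with pthreads: every philosopher locks the lower-numbered fork first, so a circular wait can never form. The names (fork_lock, philosopher) are illustrative.

#include <pthread.h>

#define N 5
pthread_mutex_t fork_lock[N] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

void philosopher(int i)
{
    int left = i, right = (i + 1) % N;
    int first  = (left < right) ? left  : right;   /* lower-numbered fork    */
    int second = (left < right) ? right : left;    /* higher-numbered fork   */

    pthread_mutex_lock(&fork_lock[first]);
    pthread_mutex_lock(&fork_lock[second]);
    /* eat(); */
    pthread_mutex_unlock(&fork_lock[second]);
    pthread_mutex_unlock(&fork_lock[first]);
}

For philosopher 4 (forks 4 and 0) the order becomes 0 then 4, which breaks the circular chain that causes the deadlock in the naive version.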
Inter-Process Communication (IPC)
IPC refers to the mechanisms provided by the operating system that allow processes to communicate and synchronize with each other, especially when they run independently or in different memory spaces. It helps processes synchronize their activities, share information, and avoid conflicts while accessing shared resources.
There are two main methods of IPC: shared memory and message passing. An operating system can implement both methods of communication.
Why IPC?
Message Passing is a method where processes communicate by sending and receiving messages
through a communication channel. This is often used when processes do not share memory and need to
exchange data safely.
Key Characteristics:
No shared address space is required; the operating system copies data between the processes.
Communication can be blocking (synchronous) or non-blocking (asynchronous).
Shared Memory Deadlocks (in OS)
A deadlock occurs when two or more processes are waiting indefinitely for resources held by each other,
creating a cycle of dependencies that halts progress.
In the context of shared memory, deadlocks can happen when multiple processes try to access and lock
shared memory segments or related synchronization primitives (like semaphores or mutexes) in
conflicting orders.
Conditions for Deadlock (all four must hold simultaneously):
1. Mutual Exclusion:
At least one resource (e.g., a shared memory segment or a lock) is held in a non-shareable mode.
2. Hold and Wait:
Processes hold resources while waiting for additional ones.
3. No Preemption:
Resources cannot be forcibly taken away from a process.
4. Circular Wait:
A circular chain of processes exists where each process waits for a resource held by the next.
Deadlock Prevention
Mutual Exclusion:
For shared memory, mutual exclusion is often necessary, but for some resources, use sharable modes
if possible.
Hold and Wait:
Require processes to request all needed resources at once, so they don't hold some while waiting for
others.
No Preemption:
Allow preemption, i.e., forcibly take resources from processes when needed.
Circular Wait:
Impose a strict ordering on resource acquisition (e.g., always request shared memory segments in a
predefined order).
Deadlock Avoidance (Banker’s Algorithm)
Banker’s Algorithm models resources and processes to check if resource allocation leads to
a safe state.
Before granting access to shared memory or locks, the OS simulates if granting will keep
the system safe.
If granting causes an unsafe state, the process waits.

Deadlock Detection
OS can periodically check for deadlocks by building a Resource Allocation Graph (RAG).
If the graph has a cycle, deadlock exists.
In shared memory, this means checking which processes hold locks and which are waiting.
Deadlock Recovery
If deadlock is detected:
Process Termination:
Abort one or more processes to break the cycle.
Resource Preemption:
Take resources away from some processes and allocate them to others.
Aspect Description
Conditions Mutual exclusion, hold & wait, no preemption, circular wait
Prevention Break any condition (ordering resources, no hold & wait)
Avoidance Use Banker’s Algorithm to allow only safe resource allocation
Detection Use Resource Allocation Graph to find cycles
Recovery Abort processes or preempt resources
IPC Mechanism      Description
Message Queues     Messages sent between processes, managed by the OS
Shared Memory      Memory area shared between processes
1. Pipes
Simple, fast
Only between related processes, limited flexibility
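A minimal sketch of a pipe between a parent and its child (the related processes), assuming the POSIX pipe() and fork() calls; the message text is illustrative.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                           /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    pipe(fd);
    if (fork() == 0) {                   /* child: reads from the pipe          */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                             /* parent: writes into the pipe        */
        close(fd[0]);
        write(fd[1], "hello", strlen("hello"));
        close(fd[1]);
    }
    return 0;
}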
2. Message Queues
Async communication
More overhead than shared memory
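A short runnable sketch using POSIX message queues (mq_open, mq_send, mq_receive); the queue name /demo_mq and the message are illustrative. On Linux, link with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "hello via message queue";
    mq_send(mq, msg, strlen(msg) + 1, 0);        /* enqueue one message          */

    char buf[64];                                /* must be >= mq_msgsize        */
    mq_receive(mq, buf, sizeof(buf), NULL);      /* dequeue it again             */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}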
3. Shared Memory
Use Case: fast exchange of large amounts of data (speed is its main advantage)
Needs additional mechanisms for synchronization (e.g., semaphores) to avoid race conditions
Low-level, error-prone
5. Sockets
UNIX domain sockets for local communication; TCP/IP sockets for remote. Supports communication between unrelated processes, including across machines.
Sockets Diagram:
+--------------+          +--------------+
|  Process A   |          |  Process B   |
|  (Client or  |          |  (Server or  |
|    Peer)     |          |    Peer)     |
+------+-------+          +-------+------+
       |                          |
       +-----------+  +-----------+
                   |  |
          +--------v--v--------+
          | Network Layer / OS |
          +--------------------+
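A minimal local example using socketpair(), which creates a connected pair of UNIX domain sockets shared by a parent ("Process A") and child ("Process B"); names and the message are illustrative.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];                                   /* sv[0] and sv[1] are connected */
    char buf[32];

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    if (fork() == 0) {                           /* child = "Process B"           */
        close(sv[0]);
        ssize_t n = read(sv[1], buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("B received: %s\n", buf);
        }
        close(sv[1]);
    } else {                                     /* parent = "Process A"          */
        close(sv[1]);
        write(sv[0], "ping", 4);
        close(sv[0]);
    }
    return 0;
}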
6. Signals
Lightweight
Very limited data can be transferred
Example – Shared Memory with a Semaphore (pseudo-code):

// Writer
ptr = shmat(shmid, NULL, 0);
sem_wait(sem);
strcpy(ptr, "Hello");
sem_post(sem);

// Reader
ptr = shmat(shmid, NULL, 0);
sem_wait(sem);
printf("%s\n", ptr);
sem_post(sem);
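A more complete, runnable sketch of the same idea using System V shared memory (shmget/shmat) with a POSIX named semaphore for synchronization; the semaphore name, segment size, and message are illustrative.

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);   /* shared segment */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 0);      /* starts at 0    */

    if (fork() == 0) {                       /* writer process                    */
        char *ptr = shmat(shmid, NULL, 0);
        strcpy(ptr, "Hello");
        sem_post(sem);                       /* signal: data is ready             */
        shmdt(ptr);
        return 0;
    }

    /* reader process */
    char *ptr = shmat(shmid, NULL, 0);
    sem_wait(sem);                           /* wait until the writer is done     */
    printf("read from shared memory: %s\n", ptr);
    shmdt(ptr);

    wait(NULL);
    sem_close(sem);
    sem_unlink("/demo_sem");
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}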
Real-Life Examples
Shells using pipes to connect commands (ls | grep foo)
Why IPC?
IPC is essential when multiple processes need to coordinate or exchange information, especially in concurrent or parallel systems.
IPC Mechanism   Description                                                      Direction         Related/Unrelated Processes
Semaphores      Used for process synchronization (e.g., locking shared memory)   N/A (signaling)   Both
Deadlock occurs in shared memory systems when two or more processes are each waiting for resources
held by the other(s), and none can proceed. This causes the system to freeze because these processes
wait indefinitely.
1. Conditions for Deadlock
For a deadlock to occur, all four of these conditions must hold simultaneously:
1. Mutual Exclusion
At least one resource must be held in a non-sharable mode (only one process can use it at a time).
2. Hold and Wait
A process holding at least one resource is waiting to acquire additional resources held by other
processes.
3. No Preemption
Resources cannot be forcibly taken away from a process; they must be released voluntarily.
4. Circular Wait
A circular chain of processes exists where each process holds a resource that the next process in
the chain is waiting for.
2. Deadlock Prevention
Deadlock prevention aims to ensure at least one of the above conditions cannot hold, thereby
preventing deadlocks from happening.
Mutual Exclusion:
Not always possible to eliminate, since some resources are inherently non-sharable.
Hold and Wait:
Prevented by requiring processes to request all needed resources at once before execution or to
release held resources before requesting new ones.
No Preemption:
Allow preemption by forcibly taking resources away from some processes when needed.
Circular Wait:
Impose an ordering on resource types and require processes to request resources in ascending
order.
3. Deadlock Avoidance
Deadlock avoidance dynamically examines resource allocation requests to ensure that granting the
request keeps the system in a safe state (i.e., no deadlock possible).
The system requires knowledge of maximum resource needs of each process upfront.
Before allocating resources, the system simulates the allocation and checks if the system
remains safe.
If safe, allocation proceeds; if not, the process must wait.
4. Banker's Algorithm
Imagine the system as a banker who lends money (resources) to customers (processes).
The banker knows the maximum loan (maximum resource requirement) each customer
might request.
The banker only grants a loan if after granting it, the system can still satisfy the maximum
needs of all other customers.
Key components:
Available – vector of currently free instances of each resource type.
Max – maximum demand of each process for each resource type.
Allocation – resources currently allocated to each process.
Need – remaining need of each process (Need = Max − Allocation).
Algorithm steps:
1. When a process requests resources, check if request ≤ Need and request ≤ Available.
2. Pretend to allocate requested resources and check if the system remains in a safe state.
3. A safe state means there exists a sequence of processes where each can finish with currently
available resources plus resources freed by previously finished processes.
4. If safe, allocate resources; else, the process must wait.
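A compact sketch of the safety check at the heart of the Banker's Algorithm, assuming small fixed-size matrices; the sizes (3 processes, 2 resource types) and names are illustrative.

#include <stdbool.h>

#define P 3   /* number of processes       */
#define R 2   /* number of resource types  */

/* Returns true if the system is in a safe state: there exists an order in
 * which every process can obtain its remaining Need and run to completion. */
bool is_safe(int available[R], int allocation[P][R], int need[P][R])
{
    int  work[R];
    bool finished[P] = { false };

    for (int j = 0; j < R; j++)
        work[j] = available[j];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;

            bool can_finish = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }

            if (can_finish) {                       /* pretend process i completes */
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];    /* it releases its resources   */
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;                           /* nobody can finish: unsafe   */
    }
    return true;
}

A request is granted only if, after tentatively moving it from Available/Need into Allocation, is_safe() still returns true; otherwise the requesting process waits.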
5. Deadlock Detection
If prevention and avoidance are not used, the system may enter deadlock. Detection algorithms
run periodically to check for deadlock.
Resource Allocation Graph (RAG):
o If a cycle is detected in the RAG and each resource has a single instance, deadlock exists.
Wait-for Graph:
o Derived from RAG by collapsing resource nodes; a cycle means deadlock.
For multiple instances of resources, a detection algorithm similar to Banker's Algorithm is used
to detect unsafe states.
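A minimal sketch of cycle detection in a wait-for graph using depth-first search, assuming an adjacency matrix where waits_for[i][j] = 1 means process i is waiting for a resource held by process j; the size and names are illustrative.

#include <stdbool.h>

#define NPROC 4

int waits_for[NPROC][NPROC];    /* waits_for[i][j] = 1: process i waits for j */

static bool dfs(int u, bool visited[], bool on_stack[])
{
    visited[u]  = true;
    on_stack[u] = true;
    for (int v = 0; v < NPROC; v++) {
        if (!waits_for[u][v]) continue;
        if (on_stack[v]) return true;               /* back edge => cycle => deadlock */
        if (!visited[v] && dfs(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

bool deadlock_exists(void)
{
    bool visited[NPROC] = { false }, on_stack[NPROC] = { false };
    for (int i = 0; i < NPROC; i++)
        if (!visited[i] && dfs(i, visited, on_stack))
            return true;
    return false;
}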
6. Deadlock Recovery
Process Termination:
o Abort one or more processes involved in deadlock until the deadlock cycle is broken.
Resource Preemption:
o Temporarily take resources away from some processes and give them to others to break
the deadlock.
Rollback:
o Roll back processes to safe states and restart them.
Aspect Description
Conditions Mutual exclusion, Hold & wait, No preemption, Circular wait
Prevention Break one condition: avoid hold & wait, allow preemption, etc.
Avoidance Resource allocation based on safe state checks (Banker’s Algorithm)
Detection Detect cycles in wait-for graph or use resource allocation matrices
Recovery Terminate processes, preempt resources, rollback processes