Unit No 3 OS Notes
Uploaded by vina.varade

Unit No. 3: Interprocess Communication & Deadlock

What is Concurrency?

Concurrency refers to the ability of an operating system to allow multiple tasks (processes or threads) to make progress at the same time, either truly in parallel (on multi-core CPUs) or through interleaved execution on a single-core CPU.

It does not necessarily mean they run simultaneously, but that they are making progress independently.

          +-------------+
          | Concurrency |
          +-------------+
            /         \
   +-----------+   +-----------+
   | Process 1 |   | Process 2 |
   +-----------+   +-----------+
            \         /
      +------------------+
      | Shared Resources |
      +------------------+

Why is Concurrency Important?

 Efficient CPU utilization (e.g., while one process waits for I/O, another can use the CPU).

 Responsiveness (especially in GUI apps or servers handling multiple clients).

 Scalability (multi-core systems benefit from true parallelism).

Concept          | Description
-----------------|--------------------------------------------------------------
Process          | An independent program with its own memory space.
Thread           | A lightweight process; multiple threads can exist in a single process, sharing memory.
Context Switch   | The act of saving and loading thread/process state when switching between them.
Critical Section | A part of code that accesses shared resources and must not be executed by more than one thread at a time.

Problems in Concurrency

1. Race Condition – Two threads access shared data at the same time, leading to unpredictable
results.

2. Deadlock – Two or more processes wait forever for resources held by each other.

3. Starvation – A process/thread waits indefinitely while others are continuously prioritized.

4. Livelock – Similar to deadlock, but the processes keep changing state and fail to progress.

Concurrency Control Mechanisms

Mechanism                | Description
-------------------------|------------------------------------------------------------------
Mutex (Mutual Exclusion) | Ensures only one thread can access the critical section.
Semaphore                | Controls access based on available permits.
Monitors                 | High-level abstraction for managing shared resources.
Condition Variables      | Used for signaling between threads when certain conditions are met.

Aspect      | Concurrency                           | Parallelism
------------|---------------------------------------|------------------------------
Definition  | Multiple tasks progress independently | Tasks execute simultaneously
Requirement | Single- or multi-core CPU             | Requires a multi-core CPU
Goal        | Structure and responsiveness          | Speed (performance)

What is the Critical Section?

The Critical Section is a part of a program where the process accesses shared resources (e.g., variables,
files, memory). If multiple processes or threads enter the critical section simultaneously, it may lead to:

 Race conditions

 Data inconsistency

 System crashes

Critical Section Diagram:

+--------------------------+
|      Entry Section       |
| (Waits for access to CS) |
+--------------------------+
            |
            v
+--------------------------+
|     Critical Section     |
| (Access shared resource) |
+--------------------------+
            |
            v
+--------------------------+
|       Exit Section       |
| (Releases access to CS)  |
+--------------------------+
            |
            v
+--------------------------+
|    Remainder Section     |
| (Performs other tasks)   |
+--------------------------+

Goal

To design a solution that ensures Mutual Exclusion, Progress, and Bounded Waiting.

Requirements for a correct solution:

1. Mutual Exclusion: Only one process can enter the critical section at a time.

2. Progress: If no process is in the critical section, one of the waiting processes must be allowed to
enter.

3. Bounded Waiting: A process should not wait forever to enter the critical section.

Synchronization Primitives to Solve the Problem

      +-------------------------------------+
      |         Concurrency Problem         |
      |  (Race Conditions, Deadlocks, etc.) |
      +------------------+------------------+
                         |
                         v
      +-------------------------------------+
      |  Apply Synchronization Primitives   |
      +----+---------+----------+-------+---+
           |         |          |       |
           v         v          v       v
     +-------+ +-----------+ +---------+ +---------+
     | Mutex | | Semaphore | | Monitor | | Barrier |
     +-------+ +-----------+ +---------+ +---------+

1. Semaphores
A Semaphore is a variable used to control access to a common resource.

Two Types:

 Binary Semaphore (0 or 1): Like a mutex.

 Counting Semaphore: Keeps track of a resource count.

Operations:

wait(S):        // Also called P(S)
    while (S <= 0);  // busy wait
    S--;

signal(S):      // Also called V(S)
    S++;


Example (Binary Semaphore):

semaphore S = 1;

process P1:
    wait(S);
    // critical section
    signal(S);

process P2:
    wait(S);
    // critical section
    signal(S);
2. Mutex (Mutual Exclusion Lock)

A Mutex is used to lock critical sections so only one thread can access the resource at a time.

A Mutex (Mutual Exclusion) is a synchronization primitive that allows only one thread or process to
access a shared resource or execute a critical section of code at a time. It functions by granting a "lock" to a
thread; any other threads attempting to acquire the same lock must wait until the first thread releases it.
This prevents data races and ensures data integrity in concurrent programming environments.

Operations:

lock(mutex);    // enters critical section
// critical section
unlock(mutex);  // exits critical section

Example in C (pthreads):

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

pthread_mutex_lock(&lock);
// critical section
pthread_mutex_unlock(&lock);

Fig: Mutex (Mutual Exclusion Lock)

Mutex ensures:

 Only one thread enters the critical section.

 Others are blocked until it's released.

3. Monitor

A Monitor is a high-level abstraction that combines:

 Data (shared resource)

 Procedures (to access it)

 Synchronization (handled internally)

In operating systems, a monitor is a software-based synchronization construct: the shared data and the procedures that operate on it are packaged together, and the monitor itself guarantees that only one process can be active inside it at a time. This controlled access protects data integrity and avoids race conditions without the programmer managing locks explicitly.

Feature            | Semaphore          | Mutex              | Monitor
-------------------|--------------------|--------------------|------------------------
Type               | Low-level          | Low-level          | High-level abstraction
Ownership          | No                 | Yes                | Yes
Handles Signaling? | Yes                | No                 | Yes
Language Support   | C, C++, Java, etc. | C, C++, Java, etc. | Java, C#, Python, etc.

To solve the Critical Section Problem, synchronization primitives like Semaphores, Mutexes, and Monitors are essential.

 Use mutexes for simple locking.

 Use semaphores for more flexible resource control.

 Use monitors for clean, high-level abstraction and safer concurrency control.

Synchronization Problems

These problems model real-world situations where multiple threads or processes need to coordinate
access to shared resources without causing race conditions, deadlock, or starvation.

Producer-Consumer Problem (Bounded Buffer Problem)

Problem Statement:

 Producer generates data and puts it into a shared buffer.

 Consumer takes data from the buffer.

 Need to prevent:

o Producer from adding when buffer is full.

o Consumer from removing when buffer is empty.

o Simultaneous access to buffer (critical section).


Solution Using Semaphores:

semaphore empty = N;  // N = buffer size
semaphore full  = 0;
semaphore mutex = 1;

Producer:
while (true) {
    produce_item();
    wait(empty);
    wait(mutex);
    insert_item();
    signal(mutex);
    signal(full);
}

Consumer:
while (true) {
    wait(full);
    wait(mutex);
    remove_item();
    signal(mutex);
    signal(empty);
    consume_item();
}
Ensures:

 Mutual exclusion (with mutex)

 Buffer not overfilled (empty)

 Buffer not read when empty (full)

Reader-Writer Problem

Problem Statement:

 Readers can read a shared resource simultaneously.

 Writers need exclusive access (no readers or other writers).

 Goal: Allow multiple readers but only one writer at a time.

Classic Semaphore Solution:

semaphore mutex = 1;
semaphore wrt   = 1;
int readcount   = 0;

Reader:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);        // first reader blocks writers
signal(mutex);

read_data();

wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);      // last reader allows writers
signal(mutex);

Writer:
wait(wrt);            // exclusive access
write_data();
signal(wrt);

Prevents:
 Writers writing while readers are reading.

 Multiple writers accessing simultaneously.

Variants:

 Reader-priority (risk of writer starvation)

 Writer-priority (solves starvation but is more complex)

Dining Philosophers Problem

Problem Statement:

 Five philosophers sit around a table.

 Each needs two forks (left and right) to eat.

 Forks are shared (one between each pair).

 Problem: Avoid deadlock and starvation.

Reader-Writer Problem Diagram:

          +--------------------+
          |  Shared Resource   |
          |  (e.g., Database)  |
          +--------------------+
             |       |       |
             v       v       v
    +----------+ +----------+     +----------+
    | Reader 1 | | Reader 2 | ... | Reader N |
    +----------+ +----------+     +----------+
     (Read access allowed concurrently
      to multiple readers only)
                   ^
                   | Exclusive Access
                   v
          +--------------------+
          |       Writer       |
          +--------------------+
     (Writes modify the shared data:
      must be exclusive, no other readers or writers)

Naive Semaphore Approach (Leads to Deadlock!):
semaphore forks[5] = {1, 1, 1, 1, 1};

Philosopher i:
wait(forks[i]);         // pick up left fork
wait(forks[(i+1)%5]);   // pick up right fork
eat();
signal(forks[i]);
signal(forks[(i+1)%5]);
Solution to Avoid Deadlock:

 Allow only 4 philosophers to try to pick forks at once.

 Or use numbered ordering, or allow philosopher to pick both forks at once with a mutex or
monitor.

semaphore mutex = 1;
semaphore forks[5];

Philosopher i:
wait(mutex);            // lock before picking up forks
wait(forks[i]);
wait(forks[(i+1)%5]);
signal(mutex);          // unlock so others can try
eat();
signal(forks[i]);
signal(forks[(i+1)%5]);

Problem             | Goal                                | Tools Used                  | Key Challenge
--------------------|-------------------------------------|-----------------------------|------------------------------
Producer-Consumer   | Balance production & consumption    | Semaphores, mutex           | Buffer overflow/underflow
Reader-Writer       | Concurrent reads, exclusive writes  | Semaphores, counters        | Starvation, mutual exclusion
Dining Philosophers | Avoid deadlock with shared forks    | Semaphores, mutex, monitors | Deadlock, starvation

Inter-Process Communication (IPC)

What is IPC?

IPC refers to the mechanisms provided by the operating system that allow processes to communicate and synchronize with each other, especially when they run independently or in different memory spaces.

 It helps processes synchronize their activities, share information and avoid conflicts while accessing shared
resources.

 There are two methods of IPC: shared memory and message passing. An operating system can implement both methods of communication.

Inter-Process Communication (IPC) Diagram:

Why IPC?

 Share data between processes

 Coordinate tasks (synchronization)

 Improve modularity (split tasks into separate processes)

 Allow concurrent processing


Message Passing (Inter-Process Communication)

Message Passing is a method where processes communicate by sending and receiving messages
through a communication channel. This is often used when processes do not share memory and need to
exchange data safely.

Key Characteristics

 Communication happens via explicit messages (data packets).


 Processes send messages and others receive them.
 Can be synchronous (blocking) or asynchronous (non-blocking).
 Often used in distributed systems or when memory sharing is not possible.

Shared Memory Deadlocks (in OS)

A deadlock occurs when two or more processes are waiting indefinitely for resources held by each other,
creating a cycle of dependencies that halts progress.

In the context of shared memory, deadlocks can happen when multiple processes try to access and lock
shared memory segments or related synchronization primitives (like semaphores or mutexes) in
conflicting orders.

Conditions for Deadlock (Coffman Conditions)

Deadlock can occur if all these conditions hold simultaneously:

1. Mutual Exclusion:
At least one resource (e.g., a shared memory segment or a lock) is held in a non-shareable mode.
2. Hold and Wait:
Processes hold resources while waiting for additional ones.
3. No Preemption:
Resources cannot be forcibly taken away from a process.
4. Circular Wait:
A circular chain of processes exists where each process waits for a resource held by the next.

Deadlock Prevention

Break one or more of the Coffman conditions to prevent deadlock:

 Mutual Exclusion:
For shared memory, mutual exclusion is often necessary, but for some resources, use sharable modes
if possible.
 Hold and Wait:
Require processes to request all needed resources at once, so they don't hold some while waiting for
others.
 No Preemption:
Allow preemption, i.e., forcibly take resources from processes when needed.
 Circular Wait:
Impose a strict ordering on resource acquisition (e.g., always request shared memory segments in a
predefined order).

Deadlock Avoidance (Banker’s Algorithm)

 Banker’s Algorithm models resources and processes to check if resource allocation leads to
a safe state.
 Before granting access to shared memory or locks, the OS simulates whether granting it will keep the system safe.
 If granting causes an unsafe state, the process waits.

Deadlock Detection

 OS can periodically check for deadlocks by building a Resource Allocation Graph (RAG).
 If the graph has a cycle, deadlock exists.
 In shared memory, this means checking which processes hold locks and which are waiting.

Deadlock Recovery

If deadlock is detected:

 Process Termination:
Abort one or more processes to break the cycle.
 Resource Preemption:
Take resources away from some processes and allocate them to others.

Aspect     | Description
-----------|----------------------------------------------------------------
Conditions | Mutual exclusion, hold & wait, no preemption, circular wait
Prevention | Break any condition (ordering resources, no hold & wait)
Avoidance  | Use Banker's Algorithm to allow only safe resource allocation
Detection  | Use Resource Allocation Graph to find cycles
Recovery   | Abort processes or preempt resources

Type         | Description                                                                                                  | Example
-------------|--------------------------------------------------------------------------------------------------------------|----------------------------------
Synchronous  | Sender waits until the receiver gets the message before continuing.                                          | Phone call conversation
Asynchronous | Sender sends the message and continues without waiting; receiver checks and processes messages when ready.   | Sending an email or text message

IPC Mechanism  | Description
---------------|----------------------------------------------------------------------
Pipes          | Unidirectional/bidirectional communication between related processes
Message Queues | Messages sent between processes, managed by the OS
Shared Memory  | Memory area shared between processes
Semaphores     | Used for signaling and synchronization
Sockets        | Communication between processes over a network or on the same system

1. Pipes

Example (in C):

int pipefd[2];
pipe(pipefd);
fork();

Pros: Simple, fast.
Cons: Only between related processes; limited flexibility.

2. Message Queues

 Processes send and receive structured messages.

 Managed by the OS kernel.

 More flexible than pipes.

Pros: Asynchronous communication.
Cons: More overhead than shared memory.

3. Shared Memory

 Processes map a shared memory region into their address space.

 Fastest IPC (no OS mediation during use).

 Needs synchronization (e.g., semaphores) to prevent race conditions.

Use Case:

 One process writes, another reads.

 Needs protection to avoid inconsistency.

Pros: High speed.
Cons: Needs additional mechanisms for synchronization.

4. Semaphores (as IPC)

 Often used with shared memory.

 Used to synchronize access to shared resources.

Pros: Prevent race conditions.
Cons: Low-level, error-prone.

5. Sockets

 Used for network communication or local inter-process communication.

 UNIX domain sockets for local; TCP/IP sockets for remote.

Pros: Support unrelated processes, even across machines.
Cons: Complex setup.

Sockets Diagram:

+-------------+              +-------------+
|  Process A  |              |  Process B  |
| (Client or  |              | (Server or  |
|    Peer)    |              |    Peer)    |
+------+------+              +------+------+
       |                            |
       |   +--------------------+   |
       +-->| Network Socket API |<--+
           +--------------------+
                      |
           +------------------------+
           |   Network Layer / OS   |
           | (Handles data transfer)|
           +------------------------+
          (Local or Remote Communication)

6. Signals

 Used to send asynchronous notifications to a process.

 Example: SIGINT, SIGTERM, SIGKILL

Pros: Lightweight.
Cons: Very limited data can be transferred.

Example: Shared Memory + Semaphore (C-style Pseudocode)

// Writer
shmid = shmget(IPC_KEY, size, IPC_CREAT | 0666);
ptr = shmat(shmid, NULL, 0);
sem_wait(sem);
strcpy(ptr, "Hello");
sem_signal(sem);

// Reader
shmid = shmget(IPC_KEY, size, 0666);
ptr = shmat(shmid, NULL, 0);
sem_wait(sem);
printf("%s", ptr);
sem_signal(sem);

Mechanism      | Speed  | Complexity | Communication | Suitable For
---------------|--------|------------|---------------|-----------------------------
Pipes          | Medium | Low        | One-way       | Related processes
Message Queues | Medium | Medium     | Two-way       | Unrelated/related processes
Shared Memory  | High   | High       | Shared space  | High-speed data exchange
Semaphores     | N/A    | Medium     | Signaling     | Sync between processes
Sockets        | Medium | High       | Two-way       | Local or remote processes
Signals        | Low    | Low        | One-way       | Notifications

Real-Life Examples

 Web server handling multiple clients using shared memory + semaphores

 Shells using pipes to connect commands (ls | grep foo)

 Chat apps using sockets for communication

 Database systems using shared memory + semaphores for caching

Inter-Process Communication (IPC) – OS Concept

What is IPC?

Inter-Process Communication (IPC) refers to a set of mechanisms that allow processes to communicate and share data with each other, either within the same computer or over a network.

IPC is essential when multiple processes need to coordinate or exchange information, especially in concurrent or parallel systems.

Why IPC?

 To share data (e.g., between producer and consumer)

 To coordinate actions (e.g., access to shared files)

 To divide work among multiple processes

 To support modular program design

IPC Mechanism       | Description                                                      | Direction                      | Related/Unrelated Processes
--------------------|------------------------------------------------------------------|--------------------------------|----------------------------
Pipes               | Stream of bytes; one-way or two-way communication                | Unidirectional / Bidirectional | Related
Named Pipes (FIFOs) | Like pipes, but with a name; usable by unrelated processes       | Bidirectional                  | Unrelated
Message Queues      | Messages sent through the kernel between processes               | Two-way                        | Both
Shared Memory       | Shared region of memory accessible by multiple processes         | N/A (direct access)            | Both
Semaphores          | Used for process synchronization (e.g., locking shared memory)   | N/A (signaling)                | Both
Sockets             | Communication over a network or between local processes          | Two-way                        | Both
Signals             | Asynchronous notifications sent to processes                     | One-way                        | Both

Shared Memory Deadlocks: Conditions, Prevention, Avoidance (Banker's Algorithm), Detection, Recovery

Deadlock occurs in shared memory systems when two or more processes are each waiting for resources
held by the other(s), and none can proceed. This causes the system to freeze because these processes
wait indefinitely.

1. Conditions for Deadlock
For a deadlock to occur, all four of these conditions must hold simultaneously:

1. Mutual Exclusion
At least one resource must be held in a non-sharable mode (only one process can use it at a time).
2. Hold and Wait
A process holding at least one resource is waiting to acquire additional resources held by other
processes.
3. No Preemption
Resources cannot be forcibly taken away from a process; they must be released voluntarily.
4. Circular Wait
A circular chain of processes exists where each process holds a resource that the next process in
the chain is waiting for.

2. Deadlock Prevention
Deadlock prevention aims to ensure at least one of the above conditions cannot hold, thereby
preventing deadlocks from happening.

 Mutual Exclusion:
Not always possible to eliminate, since some resources are inherently non-sharable.
 Hold and Wait:
Prevented by requiring processes to request all needed resources at once before execution or to
release held resources before requesting new ones.
 No Preemption:
Allow preemption by forcibly taking resources away from some processes when needed.
 Circular Wait:
Impose an ordering on resource types and require processes to request resources in ascending
order.

3. Deadlock Avoidance
Deadlock avoidance dynamically examines resource allocation requests to ensure that granting the
request keeps the system in a safe state (i.e., no deadlock possible).

 The system requires knowledge of maximum resource needs of each process upfront.
 Before allocating resources, the system simulates the allocation and checks if the system
remains safe.
 If safe, allocation proceeds; if not, the process must wait.

4. Banker's Algorithm (Deadlock Avoidance Example)


The Banker's Algorithm is a classic deadlock avoidance method:

 Imagine the system as a banker who lends money (resources) to customers (processes).
 The banker knows the maximum loan (maximum resource requirement) each customer
might request.
 The banker only grants a loan if after granting it, the system can still satisfy the maximum
needs of all other customers.

Key components:

 Available: Number of available resources of each type.


 Max: Maximum demand of each process.
 Allocation: Currently allocated resources to each process.
 Need: Remaining resource need for each process = Max - Allocation.

Algorithm steps:

1. When a process requests resources, check if request ≤ Need and request ≤ Available.
2. Pretend to allocate the requested resources and check if the system remains in a safe state.
3. A safe state means there exists a sequence of processes where each can finish with currently available resources plus resources freed by previously finished processes.
4. If safe, allocate the resources; else, the process must wait.
5. Deadlock Detection

If prevention and avoidance are not used, the system may enter deadlock. Detection algorithms run periodically to check for deadlock.

 Resource Allocation Graph (RAG):
o If a cycle is detected in RAG for single instances per resource, deadlock exists.
 Wait-for Graph:
o Derived from RAG by collapsing resource nodes; a cycle means deadlock.
 For multiple instances of resources, a detection algorithm similar to Banker's Algorithm is used
to detect unsafe states.

6. Deadlock Recovery

Once detected, recovery methods include:

 Process Termination:
o Abort one or more processes involved in deadlock until the deadlock cycle is broken.
 Resource Preemption:
o Temporarily take resources away from some processes and give them to others to break
the deadlock.
 Rollback:
o Roll back processes to safe states and restart them.

Aspect     | Description
-----------|---------------------------------------------------------------------
Conditions | Mutual exclusion, hold & wait, no preemption, circular wait
Prevention | Break one condition: avoid hold & wait, allow preemption, etc.
Avoidance  | Resource allocation based on safe-state checks (Banker's Algorithm)
Detection  | Detect cycles in wait-for graph or use resource allocation matrices
Recovery   | Terminate processes, preempt resources, roll back processes
