Unit 3: CPU Scheduling

CPU Scheduling

CPU Scheduling is a fundamental concept of Operating Systems that deals with deciding which process will get the CPU next when multiple processes are ready to execute.

Scheduling Concept

Scheduling is the process of selecting one process from the ready queue and allocating the CPU to it.

Since the CPU is a limited resource, the operating system must share it efficiently among multiple processes.

Why CPU Scheduling is Required

  • Multiple processes compete for CPU
  • CPU must not remain idle
  • To improve system performance
  • To provide fairness among processes

Types of Scheduling

| Type | Description |
|------|-------------|
| Preemptive Scheduling | The CPU can be taken away from a running process |
| Non-Preemptive Scheduling | A process runs until it finishes or blocks |

Performance Criteria (Scheduling Criteria)

Performance criteria are used to evaluate the effectiveness of a scheduling algorithm.

| Criterion | Meaning | Objective |
|-----------|---------|-----------|
| CPU Utilization | Percentage of time the CPU is busy | Maximize |
| Throughput | Number of processes completed per unit time | Maximize |
| Turnaround Time | Time from submission to completion | Minimize |
| Waiting Time | Time spent waiting in the ready queue | Minimize |
| Response Time | Time until the first CPU response | Minimize |
| Fairness | Equal CPU access for all processes | Ensure fairness |

Formula Table

| Term | Formula |
|------|---------|
| Turnaround Time | Completion Time − Arrival Time |
| Waiting Time | Turnaround Time − Burst Time |
| Response Time | First CPU Start − Arrival Time |
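The three formulas can be applied directly once completion times are known. A minimal Python sketch, using made-up process data (name, arrival time, burst time) and non-preemptive FCFS order:

```python
def fcfs_metrics(procs):
    """procs: list of (name, arrival_time, burst_time); FCFS order."""
    time, rows = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)          # CPU may sit idle until arrival
        completion = start + burst
        turnaround = completion - arrival   # Completion - Arrival
        waiting = turnaround - burst        # Turnaround - Burst
        response = start - arrival          # First CPU start - Arrival
        rows.append((name, turnaround, waiting, response))
        time = completion
    return rows

for row in fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]):
    print(row)
```

Note that under non-preemptive FCFS the waiting time and response time coincide, because a process first touches the CPU only when its full burst begins.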

Process States

A process goes through different states during its life cycle.

| State | Description |
|-------|-------------|
| New | Process is being created |
| Ready | Process is ready and waiting for the CPU |
| Running | Process is currently executing |
| Waiting (Blocked) | Process is waiting for I/O or an event |
| Terminated | Process has finished execution |

Process Transition Diagram

The process transition diagram shows movement of a process between states.

State Transitions

| From | To | Reason |
|------|----|--------|
| New | Ready | Process admitted |
| Ready | Running | CPU allocated |
| Running | Waiting | I/O request |
| Waiting | Ready | I/O completed |
| Running | Terminated | Process finished |
| Running | Ready | Preemption (time slice over) |
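The transition table above can be encoded as a small lookup structure, with a helper that rejects any move the diagram does not allow. A minimal sketch:

```python
# Allowed state transitions, taken from the transition table.
VALID = {
    "New":     {"Ready"},
    "Ready":   {"Running"},
    "Running": {"Waiting", "Ready", "Terminated"},
    "Waiting": {"Ready"},
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise."""
    if new_state not in VALID.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one full life cycle: admit, dispatch, block on I/O, resume, finish.
s = "New"
for nxt in ("Ready", "Running", "Waiting", "Ready", "Running", "Terminated"):
    s = transition(s, nxt)
print(s)  # Terminated
```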

Textual Representation

New → Ready ⇄ Running → Terminated
        ↑        │
        │        ↓  (I/O request)
        └─── Waiting
    (I/O completed)

(Ready ⇄ Running: the CPU is allocated on dispatch and returned on preemption.)

Schedulers

Schedulers are OS components that decide which process should move to which state.

Types of Schedulers

| Scheduler | Function | Speed |
|-----------|----------|-------|
| Long-Term Scheduler | Selects processes from disk into memory | Slow |
| Short-Term Scheduler | Selects a process from the ready queue for the CPU | Very fast |
| Medium-Term Scheduler | Suspends or resumes processes | Medium |

Detailed Explanation

1. Long-Term Scheduler

  • Controls degree of multiprogramming
  • Decides how many processes are loaded into memory
  • Invoked less frequently

2. Short-Term Scheduler

  • Also called CPU Scheduler
  • Decides which ready process gets the CPU
  • Must be extremely fast

3. Medium-Term Scheduler

  • Used in time-sharing systems
  • Performs swapping
  • Removes processes temporarily from memory

Process Control Block (PCB)

A Process Control Block is a data structure used by the OS to store all information about a process.

Every process has one PCB.

Contents of PCB

| Field | Description |
|-------|-------------|
| Process ID (PID) | Unique process identifier |
| Process State | Current state (Ready, Running, etc.) |
| Program Counter | Address of the next instruction |
| CPU Registers | Register values |
| CPU Scheduling Info | Priority, queue pointers |
| Memory Info | Base and limit registers |
| Accounting Info | CPU usage, execution time |
| I/O Status Info | Devices allocated |

Importance of PCB

  • Helps in context switching
  • Stores process execution status
  • Enables multitasking
  • Maintains process security and control

Context Switching (Related Concept)

When the CPU switches from one process to another:

  • Current process state is saved in PCB
  • Next process state is loaded from PCB

This operation is called Context Switching.
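The save/restore steps can be sketched with a toy PCB. The field names and the flat `cpu` dictionary below are illustrative, not a real kernel structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "Ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict):
    # 1. Save the CPU state into the current process's PCB.
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "Ready"
    # 2. Load the CPU state from the next process's PCB.
    cpu["pc"] = nxt.program_counter
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "Running"

cpu = {"pc": 104, "regs": {"r0": 7}}
a, b = PCB(1, "Running", 0, {}), PCB(2, "Ready", 200, {"r0": 42})
context_switch(a, b, cpu)
print(a.program_counter, cpu["pc"])  # 104 200
```

The time spent in steps 1 and 2 is pure overhead: no user process makes progress during a context switch.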

Summary Table

| Topic | Key Point |
|-------|-----------|
| CPU Scheduling | Decides CPU allocation |
| Performance Criteria | Measure scheduling efficiency |
| Process States | Life cycle of a process |
| Process Diagram | State transitions |
| Schedulers | Control process movement |
| PCB | Stores process information |

Process Address Space

Process Address Space is the logical memory space allocated to a process where its program and data reside during execution.

Each process has its own separate address space, ensuring memory protection.

Components of Process Address Space

| Segment | Description |
|---------|-------------|
| Text (Code) | Contains executable instructions |
| Data | Global and static variables |
| Heap | Dynamic memory allocation |
| Stack | Function calls, local variables |
| Memory-Mapped Area | Shared libraries and files |

Diagram (Textual)

High Address
----------------
     Stack        (grows downward)
----------------
     Heap         (grows upward)
----------------
     Data
----------------
     Text
----------------
Low Address

Advantages

  • Process isolation
  • Security
  • Efficient memory utilization
  • Supports multiprogramming

Process Identification Information

This information uniquely identifies and manages a process in the OS.

Process Identification Fields

| Field | Purpose |
|-------|---------|
| PID (Process ID) | Unique number for each process |
| PPID (Parent PID) | ID of the parent process |
| UID (User ID) | Owner of the process |
| GID (Group ID) | User group information |
| Priority | Scheduling importance |
| Process State | Current execution state |
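On a POSIX system, a process can read its own identification fields through the standard library. A minimal sketch (`os.getppid` is Unix-only):

```python
import os

print("PID :", os.getpid())    # unique identifier of this process
print("PPID:", os.getppid())   # identifier of the parent process
```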

Why Identification Is Important

  • Resource management
  • Security and access control
  • Process scheduling
  • Parent–child relationship tracking

Threads and Their Management

A thread is the smallest unit of execution within a process.

A process can have multiple threads sharing the same address space.

Components of a Thread

| Component | Description |
|-----------|-------------|
| Thread ID | Unique identifier of the thread |
| Program Counter | Address of the thread's next instruction |
| Register Set | The thread's working register values |
| Stack | Local variables and function-call history |

Types of Threads

| Type | Description |
|------|-------------|
| User-Level Threads | Managed by a user-space library |
| Kernel-Level Threads | Managed by the OS kernel |

User vs Kernel Threads

| Feature | User-Level | Kernel-Level |
|---------|------------|--------------|
| Management | User space | Kernel |
| Context Switch | Fast | Slower |
| Blocking | Entire process blocks | Only the thread blocks |
| OS Support | Not required | Required |

Thread Management Functions

  • Thread creation
  • Thread termination
  • Thread synchronization
  • Thread communication
  • Thread scheduling
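Creation, synchronization, and termination can all be demonstrated with Python's standard `threading` module. A minimal sketch where a `Lock` keeps a shared counter consistent across four threads:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # synchronization: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()                   # thread creation / start
for t in threads:
    t.join()                    # wait for termination
print(counter)  # 4000
```

Without the lock, the read-modify-write on `counter` could interleave between threads and lose updates.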

Benefits of Multithreading

  • Faster execution
  • Better CPU utilization
  • Improved responsiveness
  • Resource sharing

Scheduling Algorithms

Scheduling algorithms decide which process or thread gets the CPU.

Common Scheduling Algorithms

| Algorithm | Type | Key Feature |
|-----------|------|-------------|
| FCFS | Non-preemptive | Simple |
| SJF | Both | Minimum average waiting time |
| Priority | Both | Based on priority |
| Round Robin | Preemptive | Time quantum |
| Multilevel Queue | Both | Multiple queues |
| Multilevel Feedback Queue | Preemptive | Dynamic priority |

Comparison Table

| Algorithm | Starvation | Preemption |
|-----------|------------|------------|
| FCFS | No | No |
| SJF | Yes | Optional |
| Priority | Yes | Optional |
| Round Robin | No | Yes |
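Round Robin is the easiest of these to simulate: each process runs for at most one time quantum, then goes to the back of the ready queue. A minimal sketch, assuming all processes arrive at time 0 and using illustrative burst times:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}; returns {name: completion_time}."""
    queue = deque(bursts)               # ready queue, FIFO over names
    remaining = dict(bursts)
    time, done = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = time           # process finished
        else:
            queue.append(name)          # preempted: back of the queue
    return done

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

A small quantum improves responsiveness but raises context-switch overhead; a very large quantum makes Round Robin degenerate into FCFS.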

Multiprocessor Scheduling

Multiprocessor scheduling manages CPU allocation in systems with multiple processors.

Types of Multiprocessor Systems

| Type | Description |
|------|-------------|
| Asymmetric Multiprocessing (AMP) | One master CPU |
| Symmetric Multiprocessing (SMP) | All CPUs equal |

Scheduling Approaches

| Approach | Description |
|----------|-------------|
| Load Sharing | Tasks shared among CPUs |
| Processor Affinity | Process stays on the same CPU |
| Gang Scheduling | Related threads scheduled together |

Challenges

  • Load balancing
  • Cache coherence
  • Synchronization overhead

Deadlock

System Model

A system consists of:

  • Processes
  • Resources (CPU, memory, I/O devices)

Resources can be:

| Type | Example |
|------|---------|
| Reusable | CPU, printer |
| Consumable | Signals, messages |

Deadlock Characterization

Deadlock is a situation where each process waits indefinitely for resources held by others.

Necessary Conditions (Coffman Conditions)

| Condition | Meaning |
|-----------|---------|
| Mutual Exclusion | Only one process can use a resource at a time |
| Hold and Wait | A process holds one resource while waiting for others |
| No Preemption | Resources cannot be forcibly taken away |
| Circular Wait | A circular chain of processes waiting on each other |

All four must exist for deadlock to occur.

Deadlock Prevention

Prevention ensures at least one condition never holds.

| Condition Broken | Method |
|------------------|--------|
| Mutual Exclusion | Make resources sharable where possible |
| Hold and Wait | Request all resources at once |
| No Preemption | Force resource release |
| Circular Wait | Impose a global ordering on resource allocation |
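Breaking circular wait by ordering resources is the most widely used of these methods. A minimal sketch, assuming every thread acquires locks in the same global order (here, the locks' `id`), so a circular chain of waiters cannot form:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def acquire_in_order(*locks):
    for lk in sorted(locks, key=id):    # fixed global ordering on resources
        lk.acquire()

def release(*locks):
    for lk in locks:
        lk.release()

def task(name, results):
    acquire_in_order(lock_a, lock_b)    # every thread uses the same order
    results.append(name)
    release(lock_a, lock_b)

results = []
ts = [threading.Thread(target=task, args=(n, results)) for n in ("T1", "T2")]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(sorted(results))  # ['T1', 'T2']
```

If one thread instead took `lock_a` then `lock_b` while another took `lock_b` then `lock_a`, each could end up holding one lock and waiting forever for the other.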

Deadlock Avoidance

Avoidance ensures the system never enters an unsafe state.

Key Concept

  • Safe state
  • Unsafe state

Banker’s Algorithm

  • Used for deadlock avoidance
  • Requires maximum resource needs in advance
  • Checks system safety before allocation
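The heart of the Banker's Algorithm is the safety check: the state is safe if some ordering lets every process finish using the currently available resources plus whatever finished processes release. A minimal sketch with illustrative matrices (rows are processes, columns are resource types):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if a safe sequence exists."""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # i finishes, releases
                finish[i] = True
                progress = True
    return all(finish)

# Illustrative 5-process, 3-resource state.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True
```

A request is granted only if the state that would result still passes this check; otherwise the process waits.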

Deadlock Detection

In the detection approach, the system allows deadlocks to occur, then periodically checks for them and recovers.

Methods

| System Type | Method |
|-------------|--------|
| Single instance per resource | Wait-for graph |
| Multiple instances per resource | Detection algorithm |
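With single-instance resources, detection reduces to finding a cycle in the wait-for graph: an edge P → Q means P is waiting for a resource Q holds, and a cycle means deadlock. A minimal sketch with illustrative edges:

```python
def has_cycle(graph):
    """graph: {process: [processes it waits for]}; DFS cycle check."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {p: WHITE for p in graph}

    def dfs(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in graph)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```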

Recovery from Deadlock

Once a deadlock is detected, the system recovers using one of the following:

Recovery Methods

| Method | Description |
|--------|-------------|
| Process Termination | Abort one or more processes |
| Resource Preemption | Take resources away forcibly |
| Rollback | Restore processes to an earlier safe state |

Factors in Recovery

  • Process priority
  • Resource usage
  • Cost of rollback

Quick Revision Table

| Topic | Key Focus |
|-------|-----------|
| Process Address Space | Logical memory layout |
| Threads | Lightweight execution units |
| Scheduling | CPU allocation |
| Multiprocessor Scheduling | Multi-CPU systems |
| Deadlock | Resource waiting problem |
DeadlockResource waiting problem