Unit 3: CPU Scheduling
CPU Scheduling
CPU Scheduling is a fundamental operating-system activity that decides which process gets the CPU next when several processes are ready to execute.
Scheduling Concept
Scheduling is the process of selecting one process from the ready queue and allocating the CPU to it.
Since the CPU is a limited resource, the operating system must share it efficiently among multiple processes.
Why CPU Scheduling is Required
- Multiple processes compete for CPU
- CPU must not remain idle
- To improve system performance
- To provide fairness among processes
Types of Scheduling
| Type | Description |
|---|---|
| Preemptive Scheduling | CPU can be taken away from a running process |
| Non-Preemptive Scheduling | Process runs until it finishes or blocks |
Performance Criteria (Scheduling Criteria)
Performance criteria are used to evaluate the effectiveness of a scheduling algorithm.
| Criterion | Meaning | Objective |
|---|---|---|
| CPU Utilization | Percentage of time CPU is busy | Maximize |
| Throughput | Number of processes completed per unit time | Maximize |
| Turnaround Time | Time from submission to completion | Minimize |
| Waiting Time | Time spent waiting in ready queue | Minimize |
| Response Time | Time before first CPU response | Minimize |
| Fairness | Equal CPU access to all processes | Ensure fairness |
Formula Table
| Term | Formula |
|---|---|
| Turnaround Time | Completion Time − Arrival Time |
| Waiting Time | Turnaround Time − Burst Time |
| Response Time | First CPU Start − Arrival Time |
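The three formulas above can be checked with a short calculation. The process values in the example are illustrative, not taken from these notes.

```python
def metrics(arrival, burst, first_start, completion):
    """Apply the three scheduling formulas to one process."""
    turnaround = completion - arrival    # Turnaround = Completion - Arrival
    waiting = turnaround - burst         # Waiting = Turnaround - Burst
    response = first_start - arrival     # Response = First CPU Start - Arrival
    return turnaround, waiting, response

# A process arriving at t=1 with burst 3, first scheduled at t=5,
# completing at t=8: turnaround 7, waiting 4, response 4.
print(metrics(1, 3, 5, 8))  # (7, 4, 4)
```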
Process States
A process goes through different states during its life cycle.
| State | Description |
|---|---|
| New | Process is being created |
| Ready | Process is ready and waiting for CPU |
| Running | Process is currently executing |
| Waiting (Blocked) | Process waiting for I/O or event |
| Terminated | Process has finished execution |
Process Transition Diagram
The process transition diagram shows movement of a process between states.
State Transitions
| From | To | Reason |
|---|---|---|
| New | Ready | Process admitted |
| Ready | Running | CPU allocated |
| Running | Waiting | I/O request |
| Waiting | Ready | I/O completed |
| Running | Terminated | Process finished |
| Running | Ready | Preemption (time slice over) |
Textual Representation

New → Ready → Running → Terminated
Running → Waiting (I/O request)
Waiting → Ready (I/O completed)
Running → Ready (preemption)
Schedulers
Schedulers are OS components that decide which process should move to which state.
Types of Schedulers
| Scheduler | Function | Speed |
|---|---|---|
| Long-Term Scheduler | Selects processes from disk to memory | Slow |
| Short-Term Scheduler | Selects process from ready queue to CPU | Very Fast |
| Medium-Term Scheduler | Suspends or resumes processes | Medium |
Detailed Explanation
1. Long-Term Scheduler
- Controls degree of multiprogramming
- Decides how many processes are loaded into memory
- Invoked less frequently
2. Short-Term Scheduler
- Also called CPU Scheduler
- Decides which ready process gets the CPU
- Must be extremely fast
3. Medium-Term Scheduler
- Used in time-sharing systems
- Performs swapping
- Removes processes temporarily from memory
Process Control Block (PCB)
A Process Control Block is a data structure used by the OS to store all information about a process.
Every process has one PCB.
Contents of PCB
| Field | Description |
|---|---|
| Process ID (PID) | Unique process identifier |
| Process State | Current state (Ready, Running, etc.) |
| Program Counter | Address of next instruction |
| CPU Registers | Register values |
| CPU Scheduling Info | Priority, queue pointers |
| Memory Info | Base and limit registers |
| Accounting Info | CPU usage, execution time |
| I/O Status Info | Devices allocated |
Importance of PCB
- Helps in context switching
- Stores process execution status
- Enables multitasking
- Maintains process security and control
Context Switching (Related Concept)
When the CPU switches from one process to another:
- Current process state is saved in PCB
- Next process state is loaded from PCB
This operation is called Context Switching.
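The save/restore steps above can be sketched in code. The `PCB` class and the `cpu` dictionary below are simplified illustrations of the idea, not real kernel structures.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # A few representative PCB fields (simplified sketch).
    pid: int
    state: str = "Ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict):
    """Save the running process's CPU context into its PCB,
    then load the next process's context from its PCB."""
    current.program_counter = cpu["pc"]     # save state of current process
    current.registers = dict(cpu["regs"])
    current.state = "Ready"
    cpu["pc"] = nxt.program_counter          # load state of next process
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "Running"
```

Calling `context_switch(p1, p2, cpu)` leaves `p1` in the Ready state with its context preserved, and resumes `p2` exactly where its PCB says it stopped.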
Summary Table
| Topic | Key Point |
|---|---|
| CPU Scheduling | Decides CPU allocation |
| Performance Criteria | Measures scheduling efficiency |
| Process States | Life cycle of a process |
| Process Diagram | State transitions |
| Schedulers | Control process movement |
| PCB | Stores process information |
Process Address Space
Process Address Space is the logical memory space allocated to a process where its program and data reside during execution.
Each process has its own separate address space, ensuring memory protection.
Components of Process Address Space
| Segment | Description |
|---|---|
| Text (Code) | Contains executable instructions |
| Data | Global and static variables |
| Heap | Dynamic memory allocation |
| Stack | Function calls, local variables |
| Memory-Mapped Area | Shared libraries and files |
Diagram (Textual)

High address:  Stack (grows downward)
               Memory-Mapped Area
               Heap (grows upward)
               Data
Low address:   Text (Code)
Advantages
- Process isolation
- Security
- Efficient memory utilization
- Supports multiprogramming
Process Identification Information
This information uniquely identifies a process and allows the OS to manage it.
Process Identification Fields
| Field | Purpose |
|---|---|
| PID (Process ID) | Unique number for each process |
| PPID (Parent PID) | ID of parent process |
| UID (User ID) | Owner of the process |
| GID (Group ID) | User group information |
| Priority | Scheduling importance |
| Process State | Current execution state |
Why Identification Is Important
- Resource management
- Security and access control
- Process scheduling
- Parent–child relationship tracking
Threads and Their Management
A thread is the smallest unit of execution within a process.
A process can have multiple threads sharing the same address space.
Components of a Thread
| Component | Description |
|---|---|
| Thread ID | Unique identifier of the thread |
| Program Counter | Address of the next instruction |
| Register Set | Working register values of the thread |
| Stack | Local variables and function-call history |
Types of Threads
| Type | Description |
|---|---|
| User-Level Threads | Managed by user library |
| Kernel-Level Threads | Managed by OS kernel |
User vs Kernel Threads
| Feature | User-Level | Kernel-Level |
|---|---|---|
| Management | User space | Kernel |
| Context Switch | Fast | Slower |
| Blocking | Entire process blocks | Only thread blocks |
| OS Support | Not required | Required |
Thread Management Functions
- Thread creation
- Thread termination
- Thread synchronization
- Thread communication
- Thread scheduling
Benefits of Multithreading
- Faster execution
- Better CPU utilization
- Improved responsiveness
- Resource sharing
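Resource sharing within one address space can be demonstrated with Python's `threading` module. The variable names and counts below are arbitrary; the point is that all threads update the same global data, with a lock providing the synchronization listed above.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Each thread increments the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:        # synchronization: only one thread updates at a time
            counter += 1

# Thread creation, scheduling (by the runtime), and termination (join).
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread saw and updated the same variable
```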
Scheduling Algorithms
Scheduling algorithms decide which process or thread gets the CPU.
Common Scheduling Algorithms
| Algorithm | Type | Key Feature |
|---|---|---|
| FCFS | Non-preemptive | Simple |
| SJF | Both | Minimum average waiting time |
| Priority | Both | Based on priority |
| Round Robin | Preemptive | Time quantum |
| Multilevel Queue | Both | Multiple queues |
| Multilevel Feedback Queue | Preemptive | Dynamic priority |
Comparison Table
| Algorithm | Starvation | Preemption |
|---|---|---|
| FCFS | No | No |
| SJF | Yes | Optional |
| Priority | Yes | Optional |
| Round Robin | No | Yes |
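Round Robin's time-quantum behaviour can be simulated in a few lines. The burst values in the example are assumed for illustration, and all processes are taken to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes arriving at time 0.
    bursts: dict of process name -> CPU burst time.
    Returns a dict of process name -> completion time."""
    remaining = dict(bursts)
    queue = deque(bursts)          # ready queue in submission order
    time, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])   # run for one quantum at most
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = time
        else:
            queue.append(p)        # preempted: back of the ready queue
    return completion

# P1=5, P2=3, quantum=2 gives the timeline
# P1(0-2) P2(2-4) P1(4-6) P2(6-7) P1(7-8)
print(round_robin({"P1": 5, "P2": 3}, 2))  # {'P2': 7, 'P1': 8}
```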
Multiprocessor Scheduling
Multiprocessor scheduling manages CPU allocation in systems with multiple processors.
Types of Multiprocessor Systems
| Type | Description |
|---|---|
| Asymmetric Multiprocessing (AMP) | One master CPU |
| Symmetric Multiprocessing (SMP) | All CPUs equal |
Scheduling Approaches
| Approach | Description |
|---|---|
| Load Sharing | Tasks shared among CPUs |
| Processor Affinity | Process stays on same CPU |
| Gang Scheduling | Threads scheduled together |
Challenges
- Load balancing
- Cache coherence
- Synchronization overhead
Deadlock
System Model
A system consists of:
- Processes
- Resources (CPU, memory, I/O devices)
Resources can be:
| Type | Example |
|---|---|
| Reusable | CPU, Printer |
| Consumable | Signals, Messages |
Deadlock Characterization
Deadlock is a situation where each process waits indefinitely for resources held by others.
Necessary Conditions (Coffman Conditions)
| Condition | Meaning |
|---|---|
| Mutual Exclusion | One resource at a time |
| Hold and Wait | Hold one, wait for others |
| No Preemption | Resources not forcibly taken |
| Circular Wait | Circular chain of waiting |
All four must exist for deadlock to occur.
Deadlock Prevention
Prevention ensures at least one condition never holds.
| Condition Broken | Method |
|---|---|
| Mutual Exclusion | Make resources sharable |
| Hold and Wait | Request all resources at once |
| No Preemption | Force resource release |
| Circular Wait | Order resource allocation |
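Breaking circular wait by ordering resource allocation can be sketched as follows. The resource names and their numbering are hypothetical; the technique is that every process acquires resources in the same ascending global order, so a circular chain of waiting cannot form.

```python
import threading

# Hypothetical resources with an assumed fixed global numbering.
resources = {"printer": threading.Lock(), "disk": threading.Lock()}
rank = {"printer": 1, "disk": 2}

def acquire_all(names):
    """Acquire resources in ascending rank. Because every process
    requests in the same global order, circular wait is impossible."""
    ordered = sorted(names, key=rank.get)
    for name in ordered:
        resources[name].acquire()
    return ordered  # the acquisition order actually used

def release_all(names):
    for name in names:
        resources[name].release()

# Even if a process asks for "disk" first, the printer is locked first.
print(acquire_all(["disk", "printer"]))  # ['printer', 'disk']
release_all(["disk", "printer"])
```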
Deadlock Avoidance
Avoidance ensures the system never enters an unsafe state.
Key Concept
- Safe state
- Unsafe state
Banker’s Algorithm
- Used for deadlock avoidance
- Requires maximum resource needs in advance
- Checks system safety before allocation
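The safety check at the heart of the Banker's Algorithm can be sketched as below: repeatedly find a process whose remaining need fits in the available resources, let it finish, and reclaim its allocation. The sample numbers in the usage line are assumed for illustration.

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True if all processes can finish
    in some order (the system is in a safe state)."""
    n, m = len(allocation), len(available)
    # Need = Maximum - Allocation, per process and resource type.
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can run to completion and return its resources.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)

# Two processes, two resource types (illustrative values):
print(is_safe([3, 3], [[1, 0], [0, 1]], [[2, 1], [1, 2]]))  # True
```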
Deadlock Detection
Detection allows deadlock to occur and then detects it.
Methods
| System Type | Method |
|---|---|
| Single instance | Wait-for graph |
| Multiple instances | Detection algorithm |
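For single-instance resources, deadlock detection reduces to finding a cycle in the wait-for graph. A minimal depth-first-search sketch, with an assumed dictionary representation of the graph:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (single-instance resources).
    wait_for maps each process to the processes it is waiting on."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True        # back edge: a cycle, hence deadlock
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: circular wait, deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True
```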
Recovery from Deadlock
Once a deadlock is detected, the system recovers using one of the following methods.
Recovery Methods
| Method | Description |
|---|---|
| Process Termination | Abort one or more processes |
| Resource Preemption | Take resources forcibly |
| Rollback | Restore to safe state |
Factors in Recovery
- Process priority
- Resource usage
- Cost of rollback
Quick Revision Table
| Topic | Key Focus |
|---|---|
| Process Address Space | Logical memory layout |
| Threads | Lightweight execution units |
| Scheduling | CPU allocation |
| Multiprocessor Scheduling | Multi-CPU systems |
| Deadlock | Resource waiting problem |