Unit 4: Memory Management
Memory Management
Memory Management is the function of the Operating System that manages main memory (RAM) by keeping track of memory usage, allocation, protection, and deallocation.
Basic Bare Machine
A bare machine is a computer system without any operating system.
In this system:
- Only one program can run at a time
- Programmer directly controls hardware
- No memory protection or multitasking
Characteristics
| Feature | Description |
|---|---|
| OS | Not present |
| User Programs | Single |
| Memory Allocation | Manual |
| CPU Utilization | Poor |
| Protection | None |
Memory Structure: with no OS present, the entire main memory is handed over to the single user program.
Limitations
- No multitasking
- No security
- Inefficient CPU use
- Difficult to manage resources
Resident Monitor
A Resident Monitor is a small program that resides permanently in memory and controls program execution.
It is the first step toward an operating system.
Functions of Resident Monitor
| Function | Description |
|---|---|
| Job Loading | Loads programs into memory |
| Job Execution | Starts program execution |
| Job Sequencing | Executes jobs one after another |
| Error Handling | Handles basic errors |
Memory Layout: the resident monitor occupies a fixed region in low memory; the remaining memory holds the current user job.
Advantages
- Automatic job execution
- Reduced manual intervention
- Better CPU utilization than bare machine
Multiprogramming with Fixed Partitions
Memory is divided at system start-up into a fixed number of partitions, and each partition holds at most one process.
Partition Types
| Type | Description |
|---|---|
| Equal Size | All partitions same size |
| Unequal Size | Partitions of different sizes |
Memory Layout: the OS occupies low memory; the rest of memory is split into the fixed partitions, each either holding one process or standing empty.
Allocation Methods
- First Fit
- Best Fit
- Worst Fit
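As a rough sketch of how allocation works for fixed partitions, the following Python snippet (with invented partition and process sizes) places each arriving process using first fit and reports the internal fragmentation left inside its partition:

```python
# Hypothetical fixed partitions (sizes in KB) created at system start-up.
partitions = [{"size": 100, "proc": None}, {"size": 200, "proc": None},
              {"size": 300, "proc": None}, {"size": 500, "proc": None}]

def first_fit(proc_name, proc_size):
    """Place the process in the first free partition that is large enough."""
    for part in partitions:
        if part["proc"] is None and part["size"] >= proc_size:
            part["proc"] = proc_name
            internal_frag = part["size"] - proc_size  # wasted space inside the partition
            return part["size"], internal_frag
    return None  # no free partition can hold the process

for name, size in [("P1", 90), ("P2", 180), ("P3", 350), ("P4", 120)]:
    result = first_fit(name, size)
    if result:
        print(f"{name} ({size} KB) -> {result[0]} KB partition, "
              f"internal fragmentation = {result[1]} KB")
    else:
        print(f"{name} ({size} KB) could not be allocated")
```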
Advantages
| Benefit |
|---|
| Simple implementation |
| Supports multiprogramming |
Disadvantages
| Issue | Explanation |
|---|---|
| Internal Fragmentation | Wasted memory inside partition |
| Limited processes | Limited by number of partitions |
| Inflexible | Partition size fixed |
Multiprogramming with Variable Partitions
Memory is divided dynamically according to process size.
Each process is allocated exactly the amount of memory it requires.
Memory Allocation Methods
| Method | Description |
|---|---|
| First Fit | First available block |
| Best Fit | Smallest suitable block |
| Worst Fit | Largest available block |
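A minimal sketch, using made-up hole sizes, of how the three strategies in the table above would pick a free hole for the same request:

```python
def choose_hole(holes, request, strategy):
    """Return the index of the free hole selected for `request`, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":   # first hole that is large enough
        return min(candidates, key=lambda c: c[1])[1]
    if strategy == "best":    # smallest hole that is large enough
        return min(candidates)[1]
    if strategy == "worst":   # largest hole available
        return max(candidates)[1]

holes = [120, 500, 200, 300, 600]   # hypothetical free holes in KB
for strategy in ("first", "best", "worst"):
    i = choose_hole(holes, 210, strategy)
    print(f"{strategy:5} fit -> hole #{i} of {holes[i]} KB")
```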
Memory Layout: the OS occupies low memory; the remaining space is a changing mix of variable-sized allocated blocks and free holes.
Advantages
| Benefit |
|---|
| No internal fragmentation |
| Better memory utilization |
| Flexible |
Disadvantages
| Issue | Explanation |
|---|---|
| External Fragmentation | Free space scattered |
| Compaction required | Time-consuming |
| Complex management | OS must track variable-sized allocations and free holes |
Compaction
Compaction shifts processes toward one end of memory so that the scattered free holes merge into one large block, as sketched below.
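A toy illustration with an invented block list: sliding the allocated blocks toward address 0 leaves all free space as a single hole at the top of memory.

```python
# Each allocated block: (process name, start address, size); gaps between them are free.
blocks = [("P1", 0, 100), ("P2", 250, 150), ("P3", 500, 200)]
memory_size = 1000

def compact(blocks):
    """Shift every block downwards so all free space ends up as one hole."""
    compacted, next_free = [], 0
    for name, _, size in blocks:          # keep the original order of processes
        compacted.append((name, next_free, size))
        next_free += size
    return compacted, memory_size - next_free

blocks, free_hole = compact(blocks)
print(blocks)               # [('P1', 0, 100), ('P2', 100, 150), ('P3', 250, 200)]
print("single free hole of", free_hole, "KB starting at address", memory_size - free_hole)
```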
Protection Schemes
Protection schemes prevent unauthorized access to memory and ensure process isolation.
Why Protection Is Needed
- Prevents one process from overwriting another
- Enhances security
- Ensures system stability
Types of Protection Schemes
1. Fence Register
- Defines a boundary between OS and user memory
- Simple but limited
2. Base and Limit Registers
| Register | Function |
|---|---|
| Base | Starting address of process |
| Limit | Size of process memory |
Condition: a generated address A is legal only if Base ≤ A < Base + Limit; anything outside this range causes a trap to the OS.
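Assuming the condition above, a small sketch of the check that would be made on every memory reference (the register values are hypothetical):

```python
BASE, LIMIT = 3000, 1200   # hypothetical register contents for the running process

def check_access(address):
    """Allow the access only if BASE <= address < BASE + LIMIT."""
    if BASE <= address < BASE + LIMIT:
        return "access allowed"
    return "trap to OS: addressing error"

for addr in (3000, 4199, 4200, 100):
    print(addr, "->", check_access(addr))
```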
3. Relocation Register
- Allows programs to run anywhere in memory
- Supports dynamic relocation
4. Hardware Protection (MMU)
| Feature | Description |
|---|---|
| Address translation | Maps logical (virtual) addresses to physical addresses |
| Access control | Enforces read/write/execute permissions on each access |
| Memory isolation | Keeps each process confined to its own address space |
5. Protection Bits
| Bit | Meaning |
|---|---|
| Read | Allow read |
| Write | Allow write |
| Execute | Allow execution |
Comparison Table
| Scheme | Fragmentation | Flexibility |
|---|---|---|
| Bare Machine | None | Very Low |
| Resident Monitor | None | Low |
| Fixed Partitions | Internal | Low |
| Variable Partitions | External | High |
Exam-Friendly Summary
| Topic | Key Point |
|---|---|
| Bare Machine | No OS |
| Resident Monitor | First step toward an OS |
| Fixed Partition | Static memory |
| Variable Partition | Dynamic memory |
| Protection | Security & isolation |
Paging
Paging is a memory management technique in which:
- Logical memory is divided into pages
- Physical memory is divided into frames
- Page size = Frame size
Paging Address Structure
Logical Address = Page Number + Offset
| Field | Purpose |
|---|---|
| Page Number | Index into the page table |
| Offset | Exact location inside page |
Page Table
Stores mapping between page number and frame number.
| Page No | Frame No |
|---|---|
| 0 | 5 |
| 1 | 2 |
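Using the two entries above and an assumed page size of 1024 bytes, a sketch of how a logical address would be translated:

```python
PAGE_SIZE = 1024                 # assumed page/frame size in bytes
page_table = {0: 5, 1: 2}        # page number -> frame number (from the table above)

def translate(logical_address):
    """Split the address, look up the frame, and rebuild the physical address."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]          # page fault handling omitted here
    return frame * PAGE_SIZE + offset

print(translate(100))    # page 0 -> frame 5 -> 5*1024 + 100 = 5220
print(translate(1500))   # page 1 -> frame 2 -> 2*1024 + 476 = 2524
```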
Advantages
- Eliminates external fragmentation
- Efficient memory utilization
- Supports virtual memory
Disadvantages
- Page table overhead
- Internal fragmentation possible
Segmentation
Segmentation divides a program into logical segments such as:
- Code
- Data
- Stack
- Heap
Logical Address Format
Logical Address = Segment Number + Offset
Segment Table
| Segment | Base | Limit |
|---|---|---|
| Code | 1000 | 400 |
| Data | 2000 | 300 |
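With the segment table above (segment numbers 0 and 1 standing for Code and Data), a sketch of translation together with the limit check:

```python
# Segment table from above: segment number -> (base, limit)
segment_table = {0: (1000, 400),   # Code
                 1: (2000, 300)}   # Data

def translate(segment, offset):
    """Physical address = base + offset, valid only if offset < limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        return "trap: offset beyond segment limit"
    return base + offset

print(translate(0, 150))   # 1000 + 150 = 1150
print(translate(1, 350))   # trap: 350 >= limit of 300
```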
Advantages
- Logical memory view
- Better protection and sharing
- Supports modular programming
Disadvantages
- External fragmentation
- Complex memory management
Paged Segmentation
Paged segmentation is a hybrid technique combining:
- Logical view of segmentation
- Physical allocation of paging
Address Structure
Logical Address = Segment No + Page No + Offset
Working
- Segment table gives base address of page table
- Page table gives frame number
- Offset gives exact address
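A sketch of this two-step lookup; the table contents and the 1 KB page size are invented for illustration:

```python
PAGE_SIZE = 1024
# Segment table: segment number -> that segment's page table
segment_table = {
    0: {0: 7, 1: 3},     # segment 0's pages map to frames 7 and 3
    1: {0: 1},           # segment 1's single page maps to frame 1
}

def translate(segment, page, offset):
    """Segment table -> page table -> frame number -> physical address."""
    page_table = segment_table[segment]   # step 1: find the segment's page table
    frame = page_table[page]              # step 2: find the frame for this page
    return frame * PAGE_SIZE + offset     # step 3: add the offset

print(translate(0, 1, 20))   # frame 3 -> 3*1024 + 20 = 3092
```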
Advantages
- No external fragmentation
- Logical structure maintained
- Efficient and secure
Disadvantages
- Complex implementation
- Higher memory overhead
Virtual Memory Concepts
Virtual memory allows execution of programs larger than physical memory.
Only required pages are loaded into memory; remaining pages stay on disk.
Key Concepts
| Term | Meaning |
|---|---|
| Virtual Address | Generated by CPU |
| Physical Address | Actual memory address |
| Backing Store | Disk storage |
| Page Fault | Page not in memory |
Benefits
- Larger program execution
- Better memory utilization
- Increased multiprogramming
Demand Paging
Demand paging loads pages only when they are needed.
If a page is not in memory → Page Fault occurs.
Steps in Demand Paging
- CPU generates address
- The page table entry's valid bit is checked
- Page fault if page absent
- OS fetches page from disk
- Page loaded into memory
- Execution resumes
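A toy sketch of these steps in which the backing store is just a dictionary: each access checks a valid bit, and a miss is handled by "fetching" the page and marking it present.

```python
backing_store = {0: "page0-data", 1: "page1-data", 2: "page2-data"}  # pages on disk
memory = {}                              # frames currently holding pages in RAM
valid = {0: False, 1: False, 2: False}   # valid (present) bit per page
page_faults = 0

def access(page):
    """Return the page's data, servicing a page fault if it is not resident."""
    global page_faults
    if not valid[page]:                      # page fault: page absent from memory
        page_faults += 1
        memory[page] = backing_store[page]   # OS fetches the page from disk
        valid[page] = True                   # mark it present and resume
    return memory[page]

for p in [0, 1, 0, 2, 1]:
    access(p)
print("page faults:", page_faults)           # 3 faults: first touch of pages 0, 1, 2
```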
Advantages
- Less I/O
- Faster startup
- Efficient memory use
Performance of Demand Paging
Effective Access Time (EAT)
EAT = (1 - p) × MA + p × PF
Where:
- p = page fault rate
- MA = memory access time
- PF = page fault service time
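Plugging in illustrative numbers (100 ns memory access, 8 ms fault service time, one fault per 1000 accesses) shows how strongly the fault rate dominates:

```python
p  = 0.001          # assumed page fault rate (1 fault per 1000 accesses)
MA = 100            # assumed memory access time in nanoseconds
PF = 8_000_000      # assumed page fault service time (8 ms) in nanoseconds

eat = (1 - p) * MA + p * PF
print(f"EAT = {eat:.1f} ns")   # 8099.9 ns, roughly 80x slower than a plain access
```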
Factors Affecting Performance
- Page fault rate
- Disk access speed
- Page replacement algorithm
- Working set size
Page Replacement Algorithms
When memory is full and a page fault occurs, OS must replace a page.
Common Algorithms
| Algorithm | Key Idea | Issue |
|---|---|---|
| FIFO | First loaded page replaced | Belady’s anomaly |
| LRU | Least recently used page | Implementation cost |
| Optimal | Replace future unused page | Not practical |
| LFU | Least frequently used | Old pages stay |
| Clock | Approximate LRU | Efficient |
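A small sketch that counts page faults for FIFO and LRU on an arbitrary reference string with 3 frames (Optimal needs future knowledge, so it is omitted):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """FIFO: evict the page that was loaded earliest."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.remove(queue.popleft())   # evict oldest-loaded page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """LRU: evict the page that was used least recently."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)             # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)       # evict least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print("FIFO faults:", fifo_faults(refs, 3))      # 10 faults for this string
print("LRU  faults:", lru_faults(refs, 3))       # 9 faults for this string
```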
Comparison Table
| Algorithm | Page Faults | Practical |
|---|---|---|
| FIFO | High | Yes |
| LRU | Low | Yes |
| Optimal | Lowest | No |
Thrashing
Thrashing occurs when the system spends more time swapping pages in and out of memory than executing processes.
Causes
- High degree of multiprogramming
- Insufficient memory
- Poor page replacement
Effects
- Very low CPU utilization
- Performance degradation
Prevention Techniques
- Reduce multiprogramming
- Working set model
- Page fault frequency control
Cache Memory Organization
Cache memory is a small, fast memory placed between the CPU and main memory.
Cache Mapping Techniques
| Technique | Description |
|---|---|
| Direct Mapping | Each memory block maps to exactly one cache line |
| Associative Mapping | A block can be placed in any cache line |
| Set-Associative | A block maps to one set, and to any line within that set |
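A sketch of how direct mapping would locate a block, assuming 16-byte blocks and 8 cache lines:

```python
BLOCK_SIZE, NUM_LINES = 16, 8     # assumed cache geometry

def direct_map(address):
    """Each memory block maps to exactly one line: line = block number mod lines."""
    block_number = address // BLOCK_SIZE
    line = block_number % NUM_LINES       # cache line index
    tag = block_number // NUM_LINES       # tag stored to identify which block is cached
    offset = address % BLOCK_SIZE
    return tag, line, offset

print(direct_map(0x1234))   # (tag, line, offset) for this address
```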
Cache Performance Metrics
- Hit ratio
- Miss penalty
- Access time
Locality of Reference
Locality of reference means that programs tend to access:
- Same data repeatedly
- Nearby memory locations
Types of Locality
| Type | Description |
|---|---|
| Temporal | Same data reused |
| Spatial | Nearby data accessed |
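A tiny illustration: the loop below reuses the same variable on every iteration (temporal locality) and walks the list element by element (spatial locality).

```python
data = list(range(1_000))

total = 0
for value in data:        # sequential walk -> spatial locality (nearby elements)
    total += value        # `total` reused every iteration -> temporal locality
print(total)
```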
Importance
- Basis of cache memory
- Improves paging efficiency
- Reduces page faults
Quick Revision Table
| Topic | Key Focus |
|---|---|
| Paging | Fixed-size blocks |
| Segmentation | Logical division |
| Virtual Memory | Large program support |
| Demand Paging | Load on demand |
| Thrashing | Excessive paging |
| Cache | Speed improvement |
| Locality | Access pattern |