Unit 4: Memory Management




Memory Management

Memory Management is the function of the Operating System that manages main memory (RAM) by keeping track of memory usage, allocation, protection, and deallocation.

Basic Bare Machine

A bare machine is a computer system without any operating system.

In this system:

  • Only one program can run at a time
  • Programmer directly controls hardware
  • No memory protection or multitasking

Characteristics

| Feature | Description |
|---------|-------------|
| OS | Not present |
| User Programs | Single |
| Memory Allocation | Manual |
| CPU Utilization | Poor |
| Protection | None |

Memory Structure

-----------------
|    Program    |
-----------------
|  Free Memory  |
-----------------

Limitations

  • No multitasking
  • No security
  • Inefficient CPU use
  • Difficult to manage resources

Resident Monitor

A Resident Monitor is a small program that resides permanently in memory and controls program execution.

It is the first step toward an operating system.

Functions of Resident Monitor

| Function | Description |
|----------|-------------|
| Job Loading | Loads programs into memory |
| Job Execution | Starts program execution |
| Job Sequencing | Executes jobs one after another |
| Error Handling | Handles basic errors |

Memory Layout

-----------------
|   Resident    |
|   Monitor     |
-----------------
| User Program  |
-----------------

Advantages

  • Automatic job execution
  • Reduced manual intervention
  • Better CPU utilization than bare machine

Multiprogramming with Fixed Partitions

Memory is divided into fixed-size partitions, and one process occupies one partition.

Partition Types

| Type | Description |
|------|-------------|
| Equal Size | All partitions the same size |
| Unequal Size | Partitions of different sizes |

Memory Layout

-----------------
|      OS       |
-----------------
|  Partition 1  |
-----------------
|  Partition 2  |
-----------------
|  Partition 3  |
-----------------

Allocation Methods

  • First Fit
  • Best Fit
  • Worst Fit

Advantages

  • Simple implementation
  • Supports multiprogramming

Disadvantages

| Issue | Explanation |
|-------|-------------|
| Internal Fragmentation | Wasted memory inside a partition |
| Limited processes | Degree of multiprogramming limited by the number of partitions |
| Inflexible | Partition sizes are fixed |

Multiprogramming with Variable Partitions

Memory is divided dynamically according to process size.

Each process gets exactly the memory it requires.

Memory Allocation Methods

| Method | Description |
|--------|-------------|
| First Fit | First available block large enough |
| Best Fit | Smallest suitable block |
| Worst Fit | Largest available block |
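The three allocation methods above can be sketched over a list of free-block sizes. This is a minimal illustration (block sizes in KB are made-up values, and the helpers return the index of the chosen block), not an allocator implementation:

```python
def first_fit(blocks, request):
    # Scan from the front; take the first block large enough.
    for i, size in enumerate(blocks):
        if size >= request:
            return i
    return None

def best_fit(blocks, request):
    # Take the smallest block that still fits the request.
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(blocks, request):
    # Take the largest block, leaving the biggest leftover hole.
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return max(candidates)[1] if candidates else None

free = [100, 500, 200, 300, 600]      # hypothetical free blocks (KB)
print(first_fit(free, 212))  # 1 (the 500 KB block)
print(best_fit(free, 212))   # 3 (the 300 KB block, smallest that fits)
print(worst_fit(free, 212))  # 4 (the 600 KB block, largest)
```

Note how Best Fit minimizes the leftover in the chosen block, while Worst Fit keeps the leftover hole as large (and thus as reusable) as possible.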

Memory Layout

-----------------
|      OS       |
-----------------
|  Process A    |
-----------------
|  Free Space   |
-----------------
|  Process B    |
-----------------

Advantages

  • No internal fragmentation
  • Better memory utilization
  • Flexible

Disadvantages

| Issue | Explanation |
|-------|-------------|
| External Fragmentation | Free space scattered in small holes |
| Compaction required | Time-consuming |
| Complex management | Allocation and deallocation bookkeeping is harder |

Compaction

Compaction shifts processes to combine free memory into one large block.
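The effect of compaction can be shown with a toy memory map. Here memory is modeled as a list of (name, size) entries where 'FREE' marks a hole; this is a simplified sketch, ignoring the cost of actually copying process images:

```python
def compact(memory):
    """Shift all processes together and merge the holes into one free block.

    memory: list of (name, size) tuples; name 'FREE' marks a hole.
    """
    processes = [(name, size) for name, size in memory if name != 'FREE']
    free_total = sum(size for name, size in memory if name == 'FREE')
    return processes + [('FREE', free_total)]

before = [('A', 3), ('FREE', 2), ('B', 4), ('FREE', 1)]
print(compact(before))  # [('A', 3), ('B', 4), ('FREE', 3)]
```

Two scattered holes of 2 and 1 units become one 3-unit block, which a 3-unit process could now use even though neither original hole could hold it.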

Protection Schemes

Protection schemes prevent unauthorized access to memory and ensure process isolation.

Why Protection Is Needed

  • Prevents one process from overwriting another
  • Enhances security
  • Ensures system stability

Types of Protection Schemes

1. Fence Register

  • Defines a boundary between OS and user memory
  • Simple but limited

2. Base and Limit Registers

| Register | Function |
|----------|----------|
| Base | Starting address of the process |
| Limit | Size of the process memory |

Condition:

Base ≤ Address < Base + Limit

3. Relocation Register

  • Allows programs to run anywhere in memory
  • Supports dynamic relocation
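The base/limit check combined with relocation can be sketched as follows. The logical address is an offset from 0; hardware verifies it against the limit and then adds the base (the MemoryError here stands in for a hardware protection trap):

```python
def translate(base, limit, logical_addr):
    # Hardware check: the logical address must lie in [0, limit).
    if not (0 <= logical_addr < limit):
        raise MemoryError("protection fault: address out of bounds")
    # Relocation: add the base register to form the physical address.
    return base + logical_addr

print(translate(1000, 400, 100))  # 1100
# translate(1000, 400, 400) would raise: offset 400 is outside [0, 400)
```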

4. Hardware Protection (MMU)

  • Address translation
  • Access control
  • Memory isolation

5. Protection Bits

| Bit | Meaning |
|-----|---------|
| Read | Allow read |
| Write | Allow write |
| Execute | Allow execution |

Comparison Table

| Scheme | Fragmentation | Flexibility |
|--------|---------------|-------------|
| Bare Machine | None | Very Low |
| Resident Monitor | None | Low |
| Fixed Partitions | Internal | Low |
| Variable Partitions | External | High |

Exam-Friendly Summary

| Topic | Key Point |
|-------|-----------|
| Bare Machine | No OS |
| Resident Monitor | First step toward an OS |
| Fixed Partition | Static memory partitions |
| Variable Partition | Dynamic memory allocation |
| Protection | Security & isolation |

Paging

Paging is a memory management technique in which:

  • Logical memory is divided into pages
  • Physical memory is divided into frames
  • Page size = Frame size

Paging Address Structure

Logical Address = (Page Number, Offset)

| Field | Purpose |
|-------|---------|
| Page Number | Index into the page table |
| Offset | Exact location inside the page |

Page Table

Stores mapping between page number and frame number.

| Page No | Frame No |
|---------|----------|
| 0 | 5 |
| 1 | 2 |
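Using the mapping above, address translation is integer division by the page size. A minimal sketch, assuming a hypothetical 1024-byte page size (the page-frame pairs are the ones in the table):

```python
PAGE_SIZE = 1024                 # assumed page size in bytes

page_table = {0: 5, 1: 2}        # page number -> frame number

def physical_address(logical_addr):
    # Split the logical address into page number and offset.
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("page fault")
    # Physical address = frame base + offset within the page.
    return page_table[page] * PAGE_SIZE + offset

print(physical_address(100))   # page 0 -> frame 5: 5*1024 + 100 = 5220
print(physical_address(1030))  # page 1 -> frame 2: 2*1024 + 6  = 2054
```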

Advantages

  • Eliminates external fragmentation
  • Efficient memory utilization
  • Supports virtual memory

Disadvantages

  • Page table overhead
  • Internal fragmentation possible

Segmentation

Segmentation divides a program into logical segments such as:

  • Code
  • Data
  • Stack
  • Heap

Logical Address Format

Logical Address = (Segment Number, Offset)

Segment Table

| Segment | Base | Limit |
|---------|------|-------|
| Code | 1000 | 400 |
| Data | 2000 | 300 |
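Translation through the segment table above checks the offset against the limit, then adds the base (again, the MemoryError models a hardware trap):

```python
segment_table = {'Code': (1000, 400), 'Data': (2000, 300)}  # name -> (base, limit)

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    # The offset must fall within the segment's limit.
    if not (0 <= offset < limit):
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + offset

print(seg_translate('Code', 50))   # 1050
print(seg_translate('Data', 299))  # 2299
# seg_translate('Data', 300) would raise: offset equals the limit
```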

Advantages

  • Logical memory view
  • Better protection and sharing
  • Supports modular programming

Disadvantages

  • External fragmentation
  • Complex memory management

Paged Segmentation

Paged segmentation is a hybrid technique combining:

  • Logical view of segmentation
  • Physical allocation of paging

Address Structure

Logical Address = (Segment No, Page No, Offset)

Working

  1. Segment table gives base address of page table
  2. Page table gives frame number
  3. Offset gives exact address
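The three steps above can be sketched as a two-level lookup. All values here (page size, table contents) are made-up for illustration:

```python
PAGE_SIZE = 256

# Segment number -> that segment's page table (page no -> frame no).
segment_tables = {0: {0: 7, 1: 3}, 1: {0: 9}}

def translate(seg, logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = segment_tables[seg][page]      # segment table, then page table
    return frame * PAGE_SIZE + offset      # frame base + offset

print(translate(0, 300))  # seg 0, page 1, offset 44 -> frame 3: 3*256 + 44 = 812
```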

Advantages

  • No external fragmentation
  • Logical structure maintained
  • Efficient and secure

Disadvantages

  • Complex implementation
  • Higher memory overhead

Virtual Memory Concepts

Virtual memory allows execution of programs larger than physical memory.

Only required pages are loaded into memory; remaining pages stay on disk.

Key Concepts

| Term | Meaning |
|------|---------|
| Virtual Address | Generated by the CPU |
| Physical Address | Actual memory address |
| Backing Store | Disk storage for pages not in memory |
| Page Fault | Access to a page not in memory |

Benefits

  • Larger program execution
  • Better memory utilization
  • Increased multiprogramming

Demand Paging

Demand paging loads pages only when they are needed.

If a page is not in memory → Page Fault occurs.

Steps in Demand Paging

  1. CPU generates address
  2. Page table checks validity bit
  3. Page fault if page absent
  4. OS fetches page from disk
  5. Page loaded into memory
  6. Execution resumes
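The steps above can be sketched with a valid bit in the page table. This is a heavy simplification: the "disk" dict merely records which frame a non-resident page will be loaded into, standing in for the whole fault-service sequence:

```python
# page -> (valid, frame); page 1 starts out on disk (invalid).
page_table = {0: (True, 5), 1: (False, None)}
disk = {1: 2}   # hypothetical: frame the OS will load each absent page into

def access(page):
    valid, frame = page_table[page]
    if not valid:                          # page fault
        frame = disk[page]                 # OS fetches the page from disk
        page_table[page] = (True, frame)   # mark it resident
    return frame                           # execution resumes

print(access(0))  # 5 (already in memory, no fault)
print(access(1))  # 2 (fault serviced; the page is now resident)
```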

Advantages

  • Less I/O
  • Faster startup
  • Efficient memory use

Performance of Demand Paging

Effective Access Time (EAT)

EAT = (1 − p) × MA + p × PF

Where:

  • p = page fault rate
  • MA = memory access time
  • PF = page fault service time
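Plugging in illustrative numbers shows how heavily faults dominate: with a memory access of 200 ns and a fault service time of 8 ms (assumed values), even one fault per thousand accesses inflates the average access time roughly 40-fold:

```python
def eat(p, ma_ns, pf_ns):
    # Effective Access Time: EAT = (1 - p) * MA + p * PF
    return (1 - p) * ma_ns + p * pf_ns

print(eat(0.0,   200, 8_000_000))  # 200.0 ns  (no faults)
print(eat(0.001, 200, 8_000_000))  # ≈ 8199.8 ns (1 fault per 1000 accesses)
```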

Factors Affecting Performance

  • Page fault rate
  • Disk access speed
  • Page replacement algorithm
  • Working set size

Page Replacement Algorithms

When memory is full and a page fault occurs, OS must replace a page.

Common Algorithms

| Algorithm | Key Idea | Issue |
|-----------|----------|-------|
| FIFO | Replace the first-loaded page | Belady's anomaly |
| LRU | Replace the least recently used page | Costly to implement exactly |
| Optimal | Replace the page unused for the longest future time | Needs future knowledge, not practical |
| LFU | Replace the least frequently used page | Old pages tend to stay |
| Clock | Approximates LRU with a reference bit | Efficient in practice |
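FIFO replacement, and Belady's anomaly in particular, can be demonstrated in a few lines. The reference string below is the classic example where adding a fourth frame *increases* the fault count:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement with the given frame count."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:            # memory full: evict
                resident.discard(order.popleft())  # oldest page leaves
            resident.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -> more frames, more faults (Belady's anomaly)
```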

Comparison Table

| Algorithm | Page Faults | Practical |
|-----------|-------------|-----------|
| FIFO | High | Yes |
| LRU | Low | Yes |
| Optimal | Lowest | No |

Thrashing

Thrashing occurs when:

  • System spends more time swapping pages than executing processes

Causes

  • High degree of multiprogramming
  • Insufficient memory
  • Poor page replacement

Effects

  • Very low CPU utilization
  • Performance degradation

Prevention Techniques

  • Reduce multiprogramming
  • Working set model
  • Page fault frequency control

Cache Memory Organization

Cache memory is a small, fast memory placed between CPU and main memory.

Cache Mapping Techniques

| Technique | Description |
|-----------|-------------|
| Direct Mapping | Each memory block maps to exactly one cache line |
| Associative Mapping | Any block can be placed in any line |
| Set-Associative | Combination of both |
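Direct mapping is just a modulo: block number mod number of lines. A tiny sketch with an assumed 8-line cache shows why two blocks can conflict even when the rest of the cache is empty:

```python
CACHE_LINES = 8   # assumed direct-mapped cache with 8 lines

def cache_line(block_number):
    # Direct mapping: each memory block maps to exactly one line.
    return block_number % CACHE_LINES

print(cache_line(5))   # 5
print(cache_line(13))  # 5 -> blocks 5 and 13 conflict for the same line
```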

Cache Performance Metrics

  • Hit ratio
  • Miss penalty
  • Access time

Locality of Reference

Locality of reference means that programs tend to access:

  • Same data repeatedly
  • Nearby memory locations

Types of Locality

| Type | Description |
|------|-------------|
| Temporal | The same data is reused soon |
| Spatial | Nearby data is accessed next |
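A small sketch of the two access orders over a matrix. Both loops compute the same sum, but the row-major loop visits elements in memory order; in languages with contiguous 2-D arrays (such as C) that order exploits spatial locality and is the cache-friendly one:

```python
N = 4
matrix = [[r * N + c for c in range(N)] for r in range(N)]

# Row-major: walk each row left to right (adjacent elements in memory).
row_major = sum(matrix[r][c] for r in range(N) for c in range(N))
# Column-major: jump between rows on every step.
col_major = sum(matrix[r][c] for c in range(N) for r in range(N))

print(row_major == col_major)  # True: same result, different access pattern
```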

Importance

  • Basis of cache memory
  • Improves paging efficiency
  • Reduces page faults

Quick Revision Table

| Topic | Key Focus |
|-------|-----------|
| Paging | Fixed-size blocks |
| Segmentation | Logical division |
| Virtual Memory | Large program support |
| Demand Paging | Load on demand |
| Thrashing | Excessive paging |
| Cache | Speed improvement |
| Locality | Access pattern |