Digital Image Fundamentals

Digital Image Processing (DIP) deals with manipulating digital images using computers to improve image quality or extract useful information.

Steps in Digital Image Processing

The digital image processing system follows a sequence of steps from image capture to decision-making.

Main Steps

Step | Description | Example
---- | ----------- | -------
Image Acquisition | Capturing the image using a sensor | Camera captures a photo
Image Enhancement | Improving image quality | Increasing brightness
Image Restoration | Removing noise or blur | Noise removal
Color Image Processing | Processing color images | RGB image editing
Wavelets & Compression | Reducing image size | JPEG compression
Morphological Processing | Shape-based processing | Object boundary detection
Segmentation | Dividing image into regions | Face detection
Representation & Description | Feature extraction | Shape, texture
Recognition & Interpretation | Identifying objects | OCR recognition
Knowledge Base | Stores rules/data | AI model database

Exam Note: These steps are not always sequential; some may be skipped depending on the application.

Components of Digital Image Processing System

A digital image processing system consists of hardware and software elements.

System Components

Component | Function
--------- | --------
Image Sensor | Converts light into electrical signals
Digitizer | Converts analog signal into digital form
Computer | Processes image data
Image Processing Software | Algorithms and tools
Mass Storage | Stores images
Display Devices | Monitor, printer
Networking | Image transmission

Block Diagram (Conceptual)

Image Sensor → Digitizer → Computer → Output Device

Elements of Visual Perception

Visual perception refers to how humans interpret visual information.

Human Eye Structure

Part | Function
---- | --------
Retina | Receives the image
Rods | Low-light vision
Cones | Color vision
Optic Nerve | Sends signals to the brain

Brightness Adaptation

The human eye can adjust to different light levels.

Example:

  • Coming from sunlight into a dark room
  • Eyes take time to adjust

Optical Illusions

Our brain may misinterpret images.

Example: Same color looks different on dark and light backgrounds

Importance in DIP

  • Helps in contrast enhancement
  • Used in medical image display
  • Improves image visualization

Image Sensing and Acquisition

This is the first step in digital image processing.

Image acquisition is the process of capturing an image and converting it into digital form.

Image Sensing

Uses sensors to capture images.

Sensor Type | Application
----------- | -----------
CCD (Charge-Coupled Device) | Digital cameras
CMOS Sensor | Mobile phones
Infrared Sensors | Night vision
X-ray Sensors | Medical imaging

Image Acquisition Process

  • Light is reflected from the object
  • The sensor converts the light into an electrical signal
  • The digitizer converts the signal into a digital image

Real-Life Example

  • Mobile camera capturing a photo
  • MRI scan capturing brain image

Image Sampling and Quantization

These processes convert an analog image into a digital image.

Image Sampling

Sampling converts a continuous image into discrete pixels.

Explanation

  • Image is divided into a grid
  • Each grid point = pixel

Sampling Rate Effect

Sampling Rate | Image Quality
------------- | -------------
High | Clear image
Low | Blurred image

Example

  • High-resolution image = more pixels
  • Low-resolution image = fewer pixels

Image Quantization

Quantization assigns intensity values to each pixel.

Explanation

  • Converts continuous intensity into discrete levels

Quantization Levels

Levels | Quality
------ | -------
256 levels (8-bit) | High quality
16 levels | Poor quality

Sampling vs Quantization

Feature | Sampling | Quantization
------- | -------- | ------------
Converts | Space | Intensity
Result | Pixels | Gray levels
Affects | Resolution | Contrast

Relationship Between Sampling & Quantization

Digital Image = Sampling (Space) + Quantization (Intensity)

Mathematically, the result is a discrete function f(x, y).
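To make the two operations concrete, here is a minimal NumPy sketch (the helper name sample_and_quantize and the ramp test image are illustrative, not from any standard library): sampling keeps every step-th pixel of the grid, and quantization maps each intensity into one of a fixed number of levels.

```python
import numpy as np

def sample_and_quantize(img, step, levels):
    """Discretize space (sampling) and intensity (quantization)."""
    sampled = img[::step, ::step]               # keep every step-th pixel
    bins = np.floor(sampled / 256.0 * levels)   # bin index 0 .. levels-1
    return (bins * 255.0 / (levels - 1)).astype(np.uint8)

# Smooth horizontal ramp standing in for a continuous-tone image.
img = np.tile(np.linspace(0, 255, 256), (256, 1))

coarse = sample_and_quantize(img, step=4, levels=16)   # 64x64, 16 gray levels
fine = sample_and_quantize(img, step=1, levels=256)    # 256x256, 8-bit
print(coarse.shape, len(np.unique(coarse)))            # (64, 64) 16
```

Lowering step raises spatial resolution; raising levels raises intensity resolution, exactly the two axes compared in the tables above.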

Applications of Digital Image Processing

Field | Application
----- | -----------
Medical | MRI, CT scan
Security | Face recognition
Satellite | Weather forecasting
Industry | Quality inspection
AI & ML | Object detection

Relationships Between Pixels

Pixel relationships define how a pixel interacts with its neighboring pixels and are essential for image analysis, segmentation, and enhancement.

Types of Pixel Neighbors

For a pixel p at coordinates (x, y):

4-Neighborhood (N₄)

Pixels sharing a common edge.

N₄(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1)}

Diagonal Neighborhood (Nᴅ)

Pixels sharing a common corner.

Nᴅ(p) = {(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}

8-Neighborhood (N₈)

Combination of 4-neighbors and diagonal neighbors.

N₈(p) = N₄(p) ∪ Nᴅ(p)

Comparison Table

Neighborhood | Connectivity | Application
------------ | ------------ | -----------
4-neighbor | Edge | Simple segmentation
Diagonal | Corner | Pattern recognition
8-neighbor | Edge + corner | Object detection
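A tiny Python sketch of the three neighborhoods (the function neighbors is hypothetical, and clipping to the image boundary is omitted for brevity):

```python
def neighbors(x, y):
    """Return N4, ND, and N8 of the pixel p = (x, y)."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]   # edge-sharing
    nd = [(x + 1, y + 1), (x + 1, y - 1),
          (x - 1, y + 1), (x - 1, y - 1)]                   # corner-sharing
    return n4, nd, n4 + nd                                  # N8 = N4 ∪ ND

n4, nd, n8 = neighbors(2, 3)
print(n4)       # [(3, 3), (1, 3), (2, 4), (2, 2)]
print(len(n8))  # 8
```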

Adjacency of Pixels

Adjacency defines whether two pixels are connected.

Types of Adjacency

Type | Description
---- | -----------
4-adjacency | Pixels share an edge
8-adjacency | Pixels share an edge or corner
m-adjacency | Modified adjacency to avoid ambiguity
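The m-adjacency rule can be written directly from its textbook definition: pixels p and q with values in a set V are m-adjacent if q is a 4-neighbor of p, or q is a diagonal neighbor of p and their common 4-neighbors contain no pixel with a value in V. A sketch assuming interior pixels (no boundary checks):

```python
import numpy as np

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def m_adjacent(img, p, q, V):
    """True if p and q (values in V) are m-adjacent in img."""
    if img[p] not in V or img[q] not in V:
        return False
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if dx + dy == 1:                     # 4-adjacent: always fine
        return True
    if dx == 1 and dy == 1:              # diagonal: check shared 4-neighbors
        return not any(img[r] in V for r in (n4(p) & n4(q)))
    return False

img = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 0, 1]])
print(m_adjacent(img, (0, 1), (1, 0), V={1}))  # True: no shared 4-neighbor in V
```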

Distance Between Pixels

Distance Measures

For pixels p = (x₁, y₁) and q = (x₂, y₂):

  • Euclidean (Dₑ): √((x₁ − x₂)² + (y₁ − y₂)²)
  • City Block (D₄): |x₁ − x₂| + |y₁ − y₂|
  • Chessboard (D₈): max(|x₁ − x₂|, |y₁ − y₂|)
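All three measures are short enough to compute in one line each; a minimal sketch:

```python
import math

def distances(p, q):
    """Euclidean, city-block (D4), and chessboard (D8) distances."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return math.hypot(dx, dy), dx + dy, max(dx, dy)

print(distances((0, 0), (3, 4)))  # (5.0, 7, 4)
```

Note that D₄ ≥ Dₑ ≥ D₈ always holds: the city-block path is the longest and the chessboard path the shortest.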

Color Image Fundamentals

A color image is composed of multiple channels representing different color components.

Color Representation

Color perception depends on:

  • Brightness
  • Hue
  • Saturation

Color Models

A color model defines how colors are represented numerically.

RGB Color Model

RGB is an additive color model based on Red, Green, and Blue components.

Characteristics

Feature | Description
------- | -----------
Type | Additive
Components | R, G, B
Value Range | 0–255
Used In | Cameras, monitors

RGB Color Cube

  • Black → (0,0,0)
  • White → (255,255,255)
  • Red → (255,0,0)

Advantages & Limitations

Advantages | Limitations
---------- | -----------
Simple hardware | Not intuitive for humans
Direct display | Poor for color editing

HSI Color Model

HSI represents colors in terms of Hue, Saturation, and Intensity, matching human perception.

Components

Component | Meaning
--------- | -------
Hue (H) | Color type (0°–360°)
Saturation (S) | Purity of color
Intensity (I) | Brightness

HSI Color Space Shape

  • Hue → Angle
  • Saturation → Radius
  • Intensity → Vertical axis

RGB vs HSI

Feature | RGB | HSI
------- | --- | ---
User-friendly | No | Yes
Hardware oriented | Yes | No
Image enhancement | Difficult | Easy
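Converting between the two models makes the comparison concrete. Below is a per-pixel sketch using the standard geometric HSI formulas (hue as an angle, saturation as distance from the gray axis, intensity as the channel mean); the helper name rgb_to_hsi is illustrative, and inputs are assumed normalized to [0, 1]:

```python
import math

def rgb_to_hsi(r, g, b):
    """One RGB pixel (components in [0, 1]) -> (H in degrees, S, I)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                                 # gray pixel: hue undefined, use 0
        theta = 0.0
    else:
        ratio = max(-1.0, min(1.0, num / den))   # guard against float rounding
        theta = math.degrees(math.acos(ratio))
    h = 360.0 - theta if b > g else theta        # lower half of the hue circle
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 0.333...)
```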

Two-Dimensional Mathematical Preliminaries

These mathematical tools form the foundation of digital image processing.

Image as a Function

A digital image is represented as:

f(x,y)

Where:

  • x, y → spatial coordinates
  • f → intensity value

Common 2D Functions

Operation | Description
--------- | -----------
Addition | Image blending
Multiplication | Contrast control
Convolution | Filtering
Correlation | Pattern matching
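The first two rows of the table can be shown in a few NumPy lines (pixel values are toy numbers):

```python
import numpy as np

a = np.full((2, 2), 100.0)             # two small "images"
b = np.full((2, 2), 60.0)

blend = 0.5 * a + 0.5 * b              # addition: blend two images
contrast = np.clip(1.5 * a, 0, 255)    # multiplication: contrast scaling
print(blend[0, 0], contrast[0, 0])     # 80.0 150.0
```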

Convolution in 2D

g(x, y) = f(x, y) * h(x, y)

Where:

  • f → input image
  • h → filter mask
  • g → output image
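A direct, unoptimized implementation of the equation, assuming an odd-sized mask h and zero padding; the mask flip is what distinguishes convolution from correlation:

```python
import numpy as np

def convolve2d(f, h):
    """g = f * h with zero padding; output is the same size as f."""
    m, n = h.shape
    fp = np.pad(f, ((m // 2, m // 2), (n // 2, n // 2)))  # zero-pad borders
    hf = np.flipud(np.fliplr(h))    # flip the mask (convolution, not correlation)
    g = np.zeros_like(f, dtype=np.float64)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            g[y, x] = np.sum(fp[y:y + m, x:x + n] * hf)
    return g

f = np.eye(4)                 # toy image
h = np.ones((3, 3)) / 9.0     # 3x3 averaging (blur) mask
print(convolve2d(f, h))       # diagonal smeared by the blur
```

In practice a library routine (e.g. scipy.signal.convolve2d) would replace the double loop; the sketch just mirrors the formula.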

Two-Dimensional Transforms

Transforms convert images from spatial domain to frequency domain.

Discrete Fourier Transform (DFT)

DFT decomposes an image into its frequency components.

2D DFT Equation

F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) \, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)}
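In practice the 2D DFT is computed with the FFT. A short NumPy sketch, with a check that the centered spectrum's DC term equals the sum of all pixels (the random test image is arbitrary):

```python
import numpy as np

img = np.random.rand(8, 8)                # toy image
F = np.fft.fft2(img)                      # 2D DFT via the FFT
mag = np.abs(np.fft.fftshift(F))          # magnitude, zero frequency centered
print(np.allclose(mag[4, 4], img.sum()))  # True: DC term = sum of pixels
```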

Properties of DFT

Property | Description
-------- | -----------
Linearity | The transform of a weighted sum is the weighted sum of transforms
Periodicity | The spectrum repeats periodically in u and v
Symmetry | Conjugate symmetry for real-valued images
Shift property | A spatial shift changes only the phase, not the magnitude

Applications

  • Image filtering
  • Noise removal
  • Edge detection

Discrete Cosine Transform (DCT)

DCT decomposes an image into cosine frequency components only.

2D DCT Equation

C(u, v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) \cos\left[\frac{(2x+1)u\pi}{2M}\right] \cos\left[\frac{(2y+1)v\pi}{2N}\right]

where \alpha(0) = \sqrt{1/M} and \alpha(u) = \sqrt{2/M} for u > 0 (and analogously for \alpha(v) with N).
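A sketch of the energy-compaction property using SciPy's type-II DCT (scipy.fft.dctn with norm='ortho' corresponds to the normalized form above); the smooth test ramp and the 4×4 coefficient cutoff are illustrative choices:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Smooth ramp: most energy lands in low-frequency DCT coefficients.
img = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))

C = dctn(img, norm='ortho')     # 2D type-II DCT, orthonormal form
C[4:, :] = 0                    # discard high-frequency rows ...
C[:, 4:] = 0                    # ... and columns (keep 4x4 corner)
approx = idctn(C, norm='ortho')
print(np.max(np.abs(img - approx)))  # modest error despite dropping 75% of coefficients
```

Keeping only the low-frequency corner is a simplified version of how JPEG favors low frequencies when allocating bits.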

Advantages of DCT

Feature | Benefit
------- | -------
Energy compaction | Most energy in low frequencies
Real values | Easy computation
Compression friendly | JPEG standard

DFT vs DCT

Feature | DFT | DCT
------- | --- | ---
Output | Complex | Real
Used in | Filtering | Compression
Boundary effects | High | Low

Practical Applications

Technique | Application
--------- | -----------
Pixel adjacency | Segmentation
RGB/HSI | Color enhancement
DFT | Frequency filtering
DCT | Image compression