Recurrent Networks and Self-Organizing Feature Maps
Recurrent Neural Network (RNN)
Introduction to Recurrent Networks
A Recurrent Neural Network (RNN) is a type of neural network that works with sequence data.
Sequence data means:
Data that comes one after another
Order is important
Examples of sequence:
Words in a sentence
Stock prices over time
Marks of a student in different semesters
👉 Normal neural networks do not remember past data.
👉 An RNN remembers previous information.
Simple idea:
RNN has memory.
Why is an RNN Needed?
Some problems depend on past data.
We must understand previous information to predict the next output.
Example (College life):
Teacher reads: “I am going to the …”
You predict: “college”
You use previous words to guess.
Example (Mobile app):
Google keyboard predicts next word.
It checks previous words.
Basic Architecture of RNN
RNN has:
Input layer
Hidden layer (with memory)
Output layer
The hidden layer:
Takes current input
Also takes previous hidden output
So it remembers past information.
Simple Diagram (Text Form)
Input (Xt) → Hidden (Ht) → Output (Yt)
↑
Previous Hidden (Ht-1)
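The diagram above can be sketched in a few lines of NumPy. This is a toy sketch, not a trained network; the sizes (3 inputs, 2 hidden units) and the random weights are assumptions for illustration only.

```python
import numpy as np

# Toy recurrent step: the hidden state mixes the current input
# with the previous hidden state. Sizes are assumed for illustration.
rng = np.random.default_rng(0)
Wx = rng.normal(size=(2, 3))   # input (Xt) -> hidden weights
Wh = rng.normal(size=(2, 2))   # previous hidden (Ht-1) -> hidden weights
b = np.zeros(2)

def rnn_step(x_t, h_prev):
    # Ht = tanh(Wx @ Xt + Wh @ Ht-1 + b)
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

h = np.zeros(2)                # memory starts empty
for x_t in np.eye(3):          # a toy sequence of three inputs
    h = rnn_step(x_t, h)       # h carries information from earlier steps
```

Because `h` is fed back in at every step, the final hidden state depends on the whole sequence, not only the last input.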
Types of RNN
1️⃣ One to One
One input → One output
Example: Image → Label
2️⃣ One to Many
One input → Many outputs
Example: Image → Sentence description
3️⃣ Many to One
Many inputs → One output
Example: Movie review → Positive/Negative
4️⃣ Many to Many
Many inputs → Many outputs
Example: Language translation
Exam Tip
🔹 RNN is best for time-based or sequence data.
🔹 The hidden layer gives the memory feature.
Self-Organizing Feature Map (SOM)
Introduction to SOM
Self-Organizing Feature Map (SOM) is a type of neural network.
It:
Groups similar data together
Creates a 2D map
It works without labeled data.
This means it learns on its own.
👉 This is called unsupervised learning.
(Simple meaning: no teacher, no correct answer is given.)
Real-Life Example
In a shopping mall:
Similar clothes stay together.
Shoes stay in one section.
Electronics stay in another section.
SOM also groups similar data.
Determining Winner (Winning Neuron)
When input enters:
All neurons compete.
The neuron closest to input wins.
The closest neuron is called Winner neuron.
👉 We measure closeness using distance (usually Euclidean distance).
Example (College):
Suppose you compare marks of students.
The student with marks closest to 80 wins.
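In code, the winner is simply the neuron whose weight vector has the smallest distance to the input. The numbers below are made up for illustration.

```python
import numpy as np

# Three neurons, each with a 2-D weight vector (made-up values).
weights = np.array([[0.2, 0.8],
                    [0.6, 0.4],
                    [0.9, 0.1]])
x = np.array([0.7, 0.3])                     # input vector

dists = np.linalg.norm(weights - x, axis=1)  # Euclidean distance to each neuron
winner = int(np.argmin(dists))               # the closest neuron wins
print(winner)  # → 1
```

Neuron 1 wins here because its weight vector (0.6, 0.4) is closest to the input (0.7, 0.3).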
Kohonen Self-Organizing Map Architecture
SOM has:
Input layer
Output layer (2D grid)
Output layer looks like a map:
O O O
O O O
O O O
Each circle is a neuron.
SOM Algorithm (Step-by-Step)
1️⃣ Give input
2️⃣ Calculate distance from all neurons
3️⃣ Find the winner neuron
4️⃣ Update winner and its neighbors
5️⃣ Repeat many times
Over time:
Similar inputs activate nearby neurons, so they group together on the map.
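The five steps above can be sketched as a tiny 1-D SOM. The sizes (5 neurons, 2-D inputs), learning rate, neighborhood radius, and data points are all assumptions chosen for the demo.

```python
import numpy as np

# Minimal 1-D SOM: 5 neurons in a line, 2-D inputs (all sizes assumed).
rng = np.random.default_rng(0)
W = rng.random((5, 2))                          # neuron weight vectors

def som_step(x, lr=0.3, radius=1):
    dists = np.linalg.norm(W - x, axis=1)       # 2) distance to all neurons
    winner = int(np.argmin(dists))              # 3) find the winner
    for j in range(len(W)):
        if abs(j - winner) <= radius:           # 4) update winner + neighbors
            W[j] += lr * (x - W[j])

data = np.array([[0.1, 0.1], [0.9, 0.9]])
for _ in range(100):                            # 5) repeat many times
    for x in data:                              # 1) give input
        som_step(x)
```

After enough repeats, the two different inputs should win on different neurons, which is how similar data ends up grouped in separate regions of the map.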
Properties of Feature Map
Preserves data structure
Groups similar inputs
Reduces high-dimensional data to 2D
Example (Social Media):
Instagram shows similar reels together.
Shopping apps show similar products.
Exam Tip
🔹 SOM uses competition.
🔹 It does not need labeled data.
🔹 It creates clusters.
Learning Vector Quantization (LVQ)
Introduction
LVQ is similar to SOM but it uses labeled data.
That means:
It knows correct category.
👉 This is called supervised learning.
LVQ Architecture
Input layer
Competitive layer
Output layer
Each output neuron represents a class.
Example:
Class A → Science student
Class B → Commerce student
LVQ Algorithm (Steps)
1️⃣ Give input
2️⃣ Find closest neuron
3️⃣ If the classification is correct → move the neuron closer
4️⃣ If wrong → move it away
Repeat many times.
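The steps above can be sketched as LVQ1 with one prototype per class. The 2-D toy data, starting prototypes, and learning rate are assumptions for illustration.

```python
import numpy as np

# One prototype vector per class (starting positions assumed).
protos = np.array([[0.5, 0.5], [0.6, 0.6]])
labels = np.array([0, 1])        # class of each prototype
lr = 0.1

def lvq_step(x, y):
    # 2) find the closest prototype
    k = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    if labels[k] == y:
        protos[k] += lr * (x - protos[k])   # 3) correct -> move closer
    else:
        protos[k] -= lr * (x - protos[k])   # 4) wrong -> move away

for _ in range(50):              # repeat many times
    lvq_step(np.array([0.1, 0.2]), 0)      # a class-0 example
    lvq_step(np.array([0.9, 0.8]), 1)      # a class-1 example
```

After training, each prototype sits near its own class examples, so new inputs can be classified by the label of the nearest prototype.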
Real-Life Example
Teacher checks answer:
If correct → gives extra marks
If wrong → corrects student
LVQ works like teacher correction.
Difference: SOM vs LVQ
| Feature | SOM | LVQ |
|---|---|---|
| Learning type | Unsupervised | Supervised |
| Needs label | No | Yes |
| Output | Map | Class |
Principal Component Analysis (PCA)
Introduction
PCA reduces the size (number of dimensions) of data.
It:
Keeps important information
Removes less important data
👉 Used to reduce complexity.
Simple Meaning
Suppose:
You have marks in 10 subjects.
You want one overall score.
PCA finds the main combination (the direction with the most variation).
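Here is that idea as a sketch: made-up marks in 3 subjects reduced to one overall score with PCA (the data, sizes, and SVD approach are illustrative assumptions).

```python
import numpy as np

# Toy data: 20 students, 3 correlated subject marks (all values made up).
rng = np.random.default_rng(1)
base = rng.normal(70, 10, size=(20, 1))          # each student's "true" level
marks = np.hstack([base + rng.normal(0, 2, size=(20, 1)) for _ in range(3)])

X = marks - marks.mean(axis=0)                   # 1) center the data
U, s, Vt = np.linalg.svd(X, full_matrices=False) # 2) find principal directions
pc1 = Vt[0]                                      #    main combination of subjects
score = X @ pc1                                  # 3) one overall score per student
```

The first principal component points in the direction of maximum variance, so the single `score` column keeps most of the information from all three marks.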
Real-Life Example
Mobile camera:
Compress image
Keep quality
Reduce size
Why Does PCA Matter?
Speeds up learning
Removes noise
Makes data simple
Independent Component Analysis (ICA)
Introduction
ICA separates mixed signals into independent parts.
Independent means:
Not related to each other
Coming from separate sources
Real-Life Example
In classroom:
Many students talk.
You focus on your friend’s voice.
ICA separates voices.
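A minimal FastICA-style sketch of this idea is below. The signals, mixing matrix, and tanh nonlinearity are illustrative assumptions; in practice you would use a library implementation such as scikit-learn's `FastICA`.

```python
import numpy as np

# Two independent sources, mixed together (all values assumed for the demo).
t = np.linspace(0, 8, 2000)
S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]  # square wave + sine
A = np.array([[1.0, 0.5], [0.5, 1.0]])            # mixing matrix
X = S @ A.T                                       # observed mixed signals

# Center and whiten (unit variance, uncorrelated) before ICA.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X.T))
Xw = X @ E @ np.diag(1 / np.sqrt(d)) @ E.T

# FastICA with a tanh nonlinearity, one component at a time (deflation).
rng = np.random.default_rng(0)
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = Xw @ w
        # w+ = E[x*g(w.x)] - E[g'(w.x)]*w  with g = tanh
        w_new = (Xw * np.tanh(wx)[:, None]).mean(axis=0) \
                - (1 - np.tanh(wx) ** 2).mean() * w
        for j in range(i):                        # stay orthogonal to earlier ones
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if done:
            break
    W[i] = w

S_est = Xw @ W.T                                  # recovered (unmixed) signals
```

Each recovered column should line up with one of the original sources (up to sign and scale), which is how ICA "separates the voices" in the classroom example.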
Difference Between PCA and ICA
| Feature | PCA | ICA |
|---|---|---|
| Goal | Reduce dimension | Separate signals |
| Focus | Maximum variance | Independence |
| Example | Image compression | Voice separation |
Important Exam Questions
Short Questions
Define RNN.
What is winning neuron?
Difference between SOM and LVQ.
What is PCA?
What is ICA?
Long Questions
Explain RNN architecture with diagram.
Explain SOM algorithm step-by-step.
Compare PCA and ICA.
Explain LVQ with algorithm.
Quick Revision Table
| Topic | Key Idea |
|---|---|
| RNN | Has memory |
| SOM | Groups similar data |
| LVQ | Supervised version of SOM |
| PCA | Reduce data size |
| ICA | Separate mixed signals |
Remember This
✔ RNN → Sequence + Memory
✔ SOM → Unsupervised + Clustering
✔ LVQ → Supervised + Classification
✔ PCA → Reduce dimension
✔ ICA → Separate signals
Final Summary
RNN works for sequence data like text and time series.
SOM groups similar data and creates a 2D map.
LVQ improves classification using labels.
PCA reduces data size but keeps important parts.
ICA separates mixed signals.
These topics are important in neural networks and machine learning.
Understand the concepts with examples.
Revise tables before exam.