Basic Models and Perceptron Networks
Basic Models in Neural Networks
A neural network (NN) is a computer model that works like a human brain.
It learns from examples.
It finds patterns.
It makes decisions.
Example:
Face unlock on mobile phones
YouTube recommending videos
Spam email detection
McCulloch–Pitts Neuron Model
This is the first simple brain model.
Warren McCulloch and Walter Pitts created it in 1943.
It is a simple mathematical model of a neuron (brain cell).
Structure of McCulloch–Pitts Neuron
It has:
Inputs (x1, x2, x3…)
Weights (importance of each input)
Summation (adds all inputs)
Threshold (minimum value needed)
Output (0 or 1)
Simple flow:
Input → Multiply by weight → Add → Check threshold → Output
How It Works (Step-by-step)
Take the inputs.
Multiply each input by its weight.
Add all the values.
Compare the sum with the threshold:
If sum ≥ threshold → output = 1
If sum < threshold → output = 0
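A minimal sketch of these steps in Python (the inputs, weights, and threshold below are made-up values for illustration):

```python
def mcp_neuron(inputs, weights, threshold):
    # Steps 1-3: multiply each input by its weight and add everything up
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step 4: fire (1) only if the sum reaches the threshold
    return 1 if total >= threshold else 0

# With threshold 2, both inputs must be 1 for the neuron to fire
print(mcp_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcp_neuron([1, 0], [1, 1], threshold=2))  # 0
```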
Real-Life Example
Think about college attendance rule:
Condition 1: 75% attendance
Condition 2: Fees paid
If both true → You can sit in exam (Output = 1)
If not → No exam (Output = 0)
This is just like the McCulloch–Pitts model.
Important Points (Exam)
Output is binary (0 or 1).
It works only for simple problems.
It cannot solve complex patterns.
Hebb Net (Hebbian Learning)
Donald Hebb gave this simple learning rule in 1949.
Rule says:
"If two neurons activate together, connection becomes stronger."
In simple words:
When two things happen together many times, the link between them becomes strong.
Hebb Rule Formula (Simple Meaning)
New weight = Old weight + (Input × Output)
So the weight increases when:
Input = 1
Output = 1
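A tiny sketch of this update in code, assuming a learning rate of 1 (a common textbook simplification):

```python
def hebb_update(weights, inputs, output):
    # Hebb rule: a weight grows when its input and the output
    # are active (1) at the same time
    return [w + x * output for w, x in zip(weights, inputs)]

weights = [0, 0]
weights = hebb_update(weights, inputs=[1, 1], output=1)
print(weights)  # [1, 1] -> both connections became stronger
weights = hebb_update(weights, inputs=[1, 0], output=0)
print(weights)  # [1, 1] -> nothing fired together, nothing changed
```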
Real-Life Example
College example:
If you study daily and get good marks, your brain connects “study” with “success”.
The connection becomes strong.
Mobile example:
If you watch cooking videos daily, YouTube shows more cooking videos.
Remember This (Exam Tip)
Hebb rule:
"Neurons that fire together, wire together."
Activation Functions
Activation function decides the final output.
It answers:
Should neuron activate or not?
Why We Need an Activation Function?
Without it:
The output can grow very large, making the network unstable.
Every layer stays linear, so the network cannot learn complex patterns.
Types of Activation Functions
1. Step Function
Output = 0 or 1
Used in simple models
Example:
Pass or Fail result.
2. Sigmoid Function
Output between 0 and 1
Smooth curve: sigmoid(x) = 1 / (1 + e^(-x))
Used in:
Probability problems
Example:
Spam email detection (probability 0 to 1)
3. ReLU Function
ReLU means Rectified Linear Unit.
If input > 0 → output = input
If input ≤ 0 → output = 0
Used in:
Deep learning models
Example:
Instagram face filter detection.
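All three functions side by side, as a small runnable sketch:

```python
import math

def step(x):
    return 1 if x >= 0 else 0          # hard yes/no decision

def sigmoid(x):
    return 1 / (1 + math.exp(-x))      # smooth value between 0 and 1

def relu(x):
    return max(0.0, x)                 # keep positives, block negatives

for x in (-2.0, 0.5, 3.0):
    print(x, "->", step(x), round(sigmoid(x), 3), relu(x))
```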
Quick Comparison Table
| Function | Output Range | Use |
|---|---|---|
| Step | 0 or 1 | Simple classification |
| Sigmoid | 0 to 1 | Probability problems |
| ReLU | 0 to ∞ | Deep learning |
Aggregation Functions
Aggregation means combining inputs.
Most common method:
Weighted sum (multiply and add)
Formula idea:
Sum = x1w1 + x2w2 + x3w3
Real-Life Example
Shopping example:
You buy:
2 kg rice × ₹40
1 kg sugar × ₹50
Total bill = (2×40) + (1×50)
This is aggregation.
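The same bill as a weighted sum in code:

```python
quantities = [2, 1]    # inputs: kg of rice, kg of sugar
prices     = [40, 50]  # weights: rupees per kg

total = sum(x * w for x, w in zip(quantities, prices))
print(total)  # 130 = (2*40) + (1*50)
```

A neuron aggregates its inputs in exactly this multiply-and-add way.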
Perceptron Networks
Perceptron is an improved neuron model.
Frank Rosenblatt introduced it in 1958.
It can learn from data.
Perceptron Learning Rule
Perceptron adjusts its weights when it makes a mistake.
Steps:
Give input.
Check output.
Compare with correct answer.
If wrong → update weight.
Real-Life Example
Teacher checks your answer:
If wrong → teacher corrects you.
Next time you answer correctly.
Weight Update Idea (Simple)
New weight = Old weight + Learning rate × Error × Input
Learning rate means:
How fast the model learns.
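A small sketch of the whole learning loop, training the AND function (the starting weights, learning rate, and number of epochs are arbitrary choices):

```python
def predict(weights, bias, inputs):
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= 0 else 0

# AND data: output is 1 only when both inputs are 1
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for epoch in range(10):
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        # Weights change only when the answer is wrong (error != 0)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron is guaranteed to converge here.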
Exam Tip
Perceptron works only for linearly separable problems.
Single Layer Perceptron Network
It has:
One input layer
One output layer
No hidden layer
Used for:
Simple yes/no problems
Real-Life Example
Spam or Not Spam email.
Pass or Fail.
Multilayer Perceptron (MLP)
MLP has:
Input layer
One or more hidden layers
Output layer
Hidden layer means:
Extra processing layer.
Why Hidden Layer?
It helps solve complex problems.
Real-Life Example
Face recognition:
Input → Image
Hidden layers → Detect eyes, nose
Output → Person name
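A sketch of one forward pass, input → hidden → output; the layer sizes and random weights are made-up, only to show how data flows through an MLP:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input (4 values) -> hidden (3 neurons)
W2 = rng.normal(size=(1, 3))  # hidden (3 neurons) -> output (1 score)

x = np.array([0.5, -1.0, 0.2, 0.8])        # one input example
hidden = np.maximum(0, W1 @ x)             # hidden layer with ReLU
output = 1 / (1 + np.exp(-(W2 @ hidden)))  # sigmoid score between 0 and 1
print(output)
```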
Difference Table
| Feature | Single Layer | Multi Layer |
|---|---|---|
| Hidden layer | No | Yes |
| Complex problems | No | Yes |
| XOR problem | Cannot solve | Can solve |
Least Mean Square (LMS) Algorithm
LMS means:
Reduce the average (mean squared) error step by step.
It adjusts the weights a little after every example.
Goal:
Minimize error.
Real-Life Example
You practice math daily.
Each day your mistake reduces.
Gradually you improve.
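A minimal LMS sketch, assuming a linear output and a hand-picked learning rate; it learns y = 2x from three examples:

```python
def lms_step(weights, inputs, target, rate):
    prediction = sum(x * w for x, w in zip(inputs, weights))
    error = target - prediction
    # Nudge each weight a little in the direction that shrinks the error
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0]
for _ in range(200):                      # repeat, like daily practice
    for x, y in [(1, 2), (2, 4), (3, 6)]:
        weights = lms_step(weights, [x], y, rate=0.05)
print(weights)  # close to [2.0] -> the average error kept shrinking
```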
Gradient Descent Rule
Gradient means slope.
Descent means going down.
It moves towards minimum error.
Think of:
Walking down a hill to reach the lowest point.
Steps
Calculate error.
Find direction to reduce error.
Update weights.
Repeat.
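A minimal sketch of these steps on one weight, minimizing the error curve error(w) = (w - 3)²; the target 3 and the learning rate are made-up:

```python
w, rate = 0.0, 0.1
for _ in range(50):
    gradient = 2 * (w - 3)   # slope of the error curve at w
    w -= rate * gradient     # step against the slope (downhill)
print(round(w, 4))  # 3.0 -> the lowest point of the curve
```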
Mobile Example
Google Maps improving a route:
it keeps adjusting the route step by step until it reaches the best one.
Nonlinearly Separable Problems
Some problems cannot be separated with a straight line.
Example:
XOR problem.
XOR Problem
Truth table:
| x1 | x2 | Output |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
Single layer perceptron fails here.
MLP solves it.
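A sketch of why the hidden layer fixes it: with hand-picked (not learned) weights, one hidden neuron computes OR, another computes NAND, and the output neuron ANDs them together:

```python
def step(total, threshold):
    return 1 if total >= threshold else 0

def xor_mlp(x1, x2):
    h_or   = step(x1 + x2, threshold=1)      # fires if at least one input is 1
    h_nand = step(-x1 - x2, threshold=-1)    # fires unless both inputs are 1
    return step(h_or + h_nand, threshold=2)  # output = AND of the two

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_mlp(a, b))  # 0, 1, 1, 0
```

No single straight line separates these four points, but the two hidden neurons carve the plane twice, and the output combines the pieces.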
Real-Life Example
College admission through quota:
you qualify through the merit list or through the sports quota,
but not through both at the same time.
This "one or the other, but not both" rule is like XOR:
no single straight-line rule can decide it.
Benchmark Problems in Neural Networks
Benchmark means standard test problem.
Used to check performance.
Examples:
XOR problem
Digit recognition
Image classification
Why Important?
Compare different models.
Check accuracy.
Important Points for Exam
McCulloch-Pitts gives binary output.
Hebb rule strengthens connection.
Activation function decides output.
Perceptron works for linearly separable problems.
MLP solves nonlinear problems.
Gradient descent reduces error.
Possible Exam Questions
Short Questions (2–5 Marks)
Define McCulloch-Pitts model.
What is Hebb rule?
What is activation function?
Define perceptron.
What is XOR problem?
Long Questions (8–15 Marks)
Explain perceptron learning rule with example.
Compare single layer and multilayer perceptron.
Explain gradient descent rule.
Explain LMS algorithm.
Discuss nonlinear separable problems.
Quick Revision Table
| Topic | Key Idea |
|---|---|
| McCulloch-Pitts | Simple binary neuron |
| Hebb Net | Strengthen active connections |
| Activation Function | Controls output |
| Perceptron | Learns by correcting error |
| LMS | Reduce average error |
| Gradient Descent | Move toward minimum error |
| XOR | Needs multilayer network |
Final Summary for Revision
Neural networks copy how the brain works.
McCulloch-Pitts is basic model.
Hebb rule increases connection strength.
Activation function controls output.
Perceptron learns from mistakes.
Single layer solves simple problems.
Multilayer solves complex problems.
Gradient descent reduces error step by step.
XOR is nonlinear problem.
👉 Focus on:
Definitions
Differences
Learning rules
XOR example
Revise tables before exam.