Knowledge Representation and Reasoning
Introduction to Knowledge Representation and Reasoning (KRR)
Knowledge Representation and Reasoning is a core area of Artificial Intelligence that deals with:
- Representing information about the real world in a machine-readable form
- Reasoning (drawing conclusions) from the represented knowledge
Objectives of KRR
- Store knowledge efficiently
- Perform logical reasoning
- Support intelligent decision-making
KRR in AI Applications
| Area | Example |
|---|---|
| Expert Systems | Medical diagnosis |
| NLP | Language understanding |
| Robotics | Environment reasoning |
| Planning | Automated scheduling |
Propositional Logic
Propositional Logic is the simplest form of logic where statements are either true (T) or false (F).
Basic Elements
| Element | Description |
|---|---|
| Proposition | A declarative statement |
| Logical Connectives | AND (∧), OR (∨), NOT (¬), IMPLIES (→) |
| Truth Values | True or False |
Example
- P: "It is raining"
- Q: "Road is wet"
- Expression: P → Q
Truth Table for AND
| P | Q | P ∧ Q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |
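The truth table above can also be generated programmatically. A minimal Python sketch (the `truth_table` helper and the lambda definitions are illustrative, not from any standard library):

```python
from itertools import product

def truth_table(connective, symbol):
    """Print the truth table of a binary connective over P and Q."""
    print(f"| P | Q | P {symbol} Q |")
    for p, q in product([True, False], repeat=2):
        result = "T" if connective(p, q) else "F"
        print(f"| {'T' if p else 'F'} | {'T' if q else 'F'} | {result} |")

truth_table(lambda p, q: p and q, "∧")       # AND, matches the table above
truth_table(lambda p, q: (not p) or q, "→")  # IMPLIES, defined as ¬P ∨ Q
```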
Advantages
- Simple
- Easy to implement
Limitations
- Cannot represent relationships between objects
- No variables or quantifiers
Predicate Logic
Predicate Logic extends propositional logic by introducing:
- Predicates
- Variables
- Quantifiers
Structure
Predicate(subject)
Example: Human(Socrates)
Quantifiers
| Quantifier | Symbol | Meaning |
|---|---|---|
| Universal | ∀ | For all |
| Existential | ∃ | There exists |
Example
∀x (Human(x) → Mortal(x))
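Over a finite domain, a quantified formula like this can be checked by direct enumeration. A minimal Python sketch, assuming a small made-up domain of individuals:

```python
# Hypothetical finite domain and predicates (illustrative only).
domain = ["Socrates", "Plato", "Zeus"]
human = {"Socrates": True, "Plato": True, "Zeus": False}
mortal = {"Socrates": True, "Plato": True, "Zeus": False}

# ∀x (Human(x) → Mortal(x)): the implication must hold for every x.
forall_holds = all((not human[x]) or mortal[x] for x in domain)

# ∃x Human(x): at least one x in the domain is human.
exists_holds = any(human[x] for x in domain)

print(forall_holds, exists_holds)  # True True
```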
Advantages
- More expressive than propositional logic
First Order Logic (FOL)
First Order Logic is the most widely used form of predicate logic in AI; it allows quantification over objects but not over predicates or functions.
Components of FOL
| Component | Description |
|---|---|
| Constants | Specific objects |
| Variables | General objects |
| Predicates | Properties/relations |
| Functions | Mapping objects |
| Quantifiers | ∀, ∃ |
Example
King(John)
∀x (King(x) → Person(x))
Inference in First Order Logic
Inference is the process of deriving new knowledge from existing facts.
Common Inference Techniques
| Technique | Description |
|---|---|
| Modus Ponens | If P → Q and P, then Q |
| Unification | Finding variable substitutions that make expressions identical |
| Forward Chaining | Data-driven reasoning |
| Backward Chaining | Goal-driven reasoning |
Example (Modus Ponens)
```
P → Q
P
-----
Q
```
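A minimal Python sketch of Modus Ponens with a very simple form of unification; the tuple-based atom representation and the `?x` variable convention are assumptions for illustration:

```python
def unify(pattern, fact):
    """Match a pattern like ("Human", "?x") against a ground fact.
    Returns a variable-binding dict, or None if they don't match."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    bindings = {}
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            bindings[p] = f          # bind variable to the constant
        elif p != f:
            return None
    return bindings

def modus_ponens(rule, fact):
    """rule = (premise, conclusion); if the premise unifies with the fact,
    return the conclusion with the bindings substituted in."""
    premise, conclusion = rule
    bindings = unify(premise, fact)
    if bindings is None:
        return None
    return tuple(bindings.get(t, t) for t in conclusion)

rule = (("Human", "?x"), ("Mortal", "?x"))        # ∀x (Human(x) → Mortal(x))
print(modus_ponens(rule, ("Human", "Socrates")))  # ('Mortal', 'Socrates')
```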
Clause Form Conversion
Clause form (Conjunctive Normal Form, CNF) is the format required before resolution can be applied.
Steps in Clause Form Conversion
- Eliminate implications (rewrite P → Q as ¬P ∨ Q)
- Move negation inward (De Morgan's laws)
- Standardize variables (give each quantifier a distinct variable)
- Skolemize (replace existentially quantified variables with Skolem constants/functions)
- Drop universal quantifiers
- Distribute ∨ over ∧ to obtain CNF
Example
∀x (Human(x) → Mortal(x))
Converted to:
¬Human(x) ∨ Mortal(x)
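The first conversion step (eliminating implications) can be mechanized on a small formula tree. A minimal Python sketch, assuming nested tuples as the formula representation:

```python
# Formulas as nested tuples: ("->", p, q), ("not", p), ("or", p, q); atoms are strings.
def eliminate_implications(f):
    """Rewrite every P → Q subformula as ¬P ∨ Q."""
    if isinstance(f, str):          # atom, nothing to rewrite
        return f
    op, *args = f
    args = [eliminate_implications(a) for a in args]
    if op == "->":
        return ("or", ("not", args[0]), args[1])
    return (op, *args)

formula = ("->", "Human(x)", "Mortal(x)")   # ∀x is implicit after dropping quantifiers
print(eliminate_implications(formula))
# ('or', ('not', 'Human(x)'), 'Mortal(x)')  i.e. ¬Human(x) ∨ Mortal(x)
```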
Resolution
Resolution is a sound and refutation-complete inference method used in propositional and first order logic: if a set of clauses is unsatisfiable, resolution can derive the empty clause.
Resolution Principle
Two clauses containing complementary literals (P in one, ¬P in the other) are combined into a resolvent consisting of all the remaining literals
Example
Clause 1: (A ∨ B)
Clause 2: (¬A ∨ C)
Resolved Clause: (B ∨ C)
Resolution Process Diagram
```
(A ∨ B)    (¬A ∨ C)
      \      /
      (B ∨ C)
```
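A minimal Python sketch of binary resolution, representing clauses as frozensets of literals with a leading `~` marking negation (both conventions are assumptions for illustration):

```python
def resolve(c1, c2):
    """Return all clauses obtained by resolving c1 and c2 on a complementary pair."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            # Drop the complementary pair, keep everything else.
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

clause1 = frozenset({"A", "B"})     # (A ∨ B)
clause2 = frozenset({"~A", "C"})    # (¬A ∨ C)
print(resolve(clause1, clause2))    # [frozenset({'B', 'C'})], i.e. (B ∨ C)
```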
Advantages
- Refutation-complete inference mechanism
- Used in automated theorem proving
Comparison Table
| Logic Type | Expressiveness | Complexity |
|---|---|---|
| Propositional | Low | Low |
| Predicate | Medium | Medium |
| First Order | High | High |
Exam-Oriented Notes (MCA)
- Write syntax and example clearly
- Draw resolution steps
- Mention Skolemization steps
- Compare propositional vs FOL
- Use tables wherever possible
Chaining, Utility Theory and Probabilistic Reasoning
Chaining – Concept
Chaining is a reasoning technique used in Artificial Intelligence and Expert Systems to derive conclusions from known facts using a set of rules.
Chaining works on IF–THEN rules:
IF condition THEN action
Purpose of Chaining
- To infer new facts
- To reach a goal state logically
- To support automated decision-making
Types of Chaining
| Type | Approach |
|---|---|
| Forward Chaining | Data-driven |
| Backward Chaining | Goal-driven |
Forward Chaining
Forward chaining starts with known facts and applies rules to infer new facts until the goal is reached or no rule can add anything new.
Working of Forward Chaining
- Start with initial facts
- Match facts with rule conditions
- Fire applicable rules
- Add new facts to knowledge base
- Stop when the goal is achieved or no new facts can be derived
Example
Rule: IF Fever THEN Infection
Fact: Fever
Conclusion: Infection
Forward Chaining Diagram
Facts → Rules → New Facts → Goal
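A minimal Python sketch of this loop, with a made-up rule base in (premises, conclusion) form:

```python
def forward_chain(facts, rules, goal):
    """Fire rules whose premises are all known, adding their conclusions,
    until the goal appears or no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # fire the rule
                changed = True
    return goal in facts

# Made-up medical rule base: IF Fever THEN Infection, IF Infection THEN Treatment.
rules = [({"Fever"}, "Infection"),
         ({"Infection"}, "Treatment")]
print(forward_chain({"Fever"}, rules, "Treatment"))  # True
```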
Applications
- Expert systems
- Medical diagnosis
- Fault detection
Advantages & Limitations
| Advantages | Limitations |
|---|---|
| Automatic reasoning | May generate unnecessary facts |
| Suitable for dynamic data | Slower for specific goals |
Backward Chaining
Backward chaining starts with a goal and works backward to determine whether known facts support the goal.
Working of Backward Chaining
- Identify the goal
- Find rules that conclude the goal
- Check if rule conditions are satisfied
- Prove conditions using facts
Example
Goal: Infection
Rule: IF Fever THEN Infection
Check: Is Fever true?
Backward Chaining Diagram
Goal ← Rules ← Facts
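A minimal Python sketch of the same idea in reverse; the rule representation matches the forward-chaining sketch above, and it assumes the rule base contains no cycles:

```python
def backward_chain(goal, facts, rules):
    """Prove the goal by finding a rule that concludes it and recursively
    proving that rule's premises (assumes an acyclic rule base)."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

rules = [({"Fever"}, "Infection")]
print(backward_chain("Infection", {"Fever"}, rules))  # True: Fever proves the goal
print(backward_chain("Infection", set(), rules))      # False: Fever cannot be proved
```

This goal-directed recursion is essentially how Prolog answers queries.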
Applications
- Prolog
- Diagnosis systems
- Query-based systems
Advantages & Limitations
| Advantages | Limitations |
|---|---|
| Efficient for specific goals | Requires predefined goals |
| Less computation | Not suitable for all problems |
Utility Theory
Utility theory deals with decision-making under uncertainty by assigning numerical values (utilities) to outcomes.
Key Concepts
| Term | Meaning |
|---|---|
| Utility | Measure of satisfaction or preference |
| Rational Agent | Maximizes expected utility |
| Preference | Ranking of outcomes |
Utility Function
U(outcome) → Real Number
Example
Choosing between job offers based on salary, location, and growth prospects (see the sketch below)
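A minimal Python sketch of expected-utility comparison; all probabilities and utility values are made up for illustration:

```python
# Each option maps to (probability, utility) pairs for its possible outcomes.
offers = {
    "Job A": [(0.7, 80), (0.3, 40)],   # e.g. high growth vs. stagnation
    "Job B": [(0.9, 60), (0.1, 50)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# A rational agent chooses the option with maximum expected utility.
best = max(offers, key=lambda o: expected_utility(offers[o]))
print(best, expected_utility(offers[best]))  # Job A 68.0
```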
Importance
- Helps AI agents make optimal decisions
- Used in economics and game theory
Probabilistic Reasoning
Probabilistic reasoning handles uncertainty using probability theory.
Why Probability?
- Real-world data is incomplete
- Information may be noisy
- Outcomes are uncertain
Basic Probability Rules
| Rule | Formula |
|---|---|
| Conditional Probability | P(A\|B) = P(A ∧ B) / P(B) |
| Bayes’ Theorem | P(A\|B) = P(B\|A) · P(A) / P(B) |
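A minimal numeric sketch of Bayes' theorem, reusing the fever/infection example from the chaining section; all probabilities are made up:

```python
# Made-up values for the fever/infection example.
p_fever_given_inf = 0.9        # P(Fever | Infection)
p_inf = 0.1                    # P(Infection), the prior
p_fever_given_not_inf = 0.2    # P(Fever | ¬Infection)

# Total probability: P(Fever) = P(F|I)·P(I) + P(F|¬I)·P(¬I)
p_fever = p_fever_given_inf * p_inf + p_fever_given_not_inf * (1 - p_inf)

# Bayes' theorem: P(Infection | Fever) = P(Fever | Infection)·P(Infection) / P(Fever)
p_inf_given_fever = p_fever_given_inf * p_inf / p_fever
print(round(p_inf_given_fever, 3))  # 0.333
```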
Hidden Markov Model (HMM)
A Hidden Markov Model is a probabilistic model where:
- States are hidden
- Observations are visible
Components of HMM
| Component | Description |
|---|---|
| States | Hidden internal states of the system |
| Observations | Visible outputs |
| Transition Probabilities | Probability of moving from one state to another |
| Emission Probabilities | Probability of an observation given a state |
| Initial Probabilities | Probability of each starting state |
HMM Structure Diagram
```
State1 → State2 → State3
  ↓         ↓         ↓
 Obs1      Obs2      Obs3
```
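A minimal Python sketch of the forward algorithm, which computes the likelihood of an observation sequence by summing over all hidden state paths; the weather example and every probability here are made up:

```python
# Made-up two-state weather HMM.
states = ["Rainy", "Sunny"]
start = {"Rainy": 0.6, "Sunny": 0.4}                 # initial state probabilities
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},      # transition probabilities
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"Walk": 0.1, "Umbrella": 0.9},     # emission probabilities
        "Sunny": {"Walk": 0.8, "Umbrella": 0.2}}

def likelihood(observations):
    """Forward algorithm: P(observations) summed over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans[prev][s] for prev in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(likelihood(["Umbrella", "Walk"]))  # probability of seeing Umbrella then Walk
```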
Applications
- Speech recognition
- Weather prediction
- Bioinformatics
Bayesian Networks
Bayesian Networks are graphical models that represent probabilistic relationships among variables.
Structure
- Nodes → Random variables
- Edges → Dependencies
- Directed Acyclic Graph (DAG)
Example
Rain → Traffic → Late
Bayesian Network Diagram
```
Rain
 ↓
Traffic
 ↓
Late
```
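A minimal Python sketch that computes P(Late) for this chain by applying the chain rule and marginalizing over Rain and Traffic; every CPT entry is made up:

```python
# Made-up CPTs for the chain Rain → Traffic → Late.
p_rain = 0.3
p_traffic_given_rain = {True: 0.8, False: 0.2}   # P(Traffic | Rain)
p_late_given_traffic = {True: 0.6, False: 0.1}   # P(Late | Traffic)

# Chain rule + marginalization:
# P(Late) = Σ_rain Σ_traffic P(rain)·P(traffic | rain)·P(Late | traffic)
p_late = 0.0
for rain in (True, False):
    pr = p_rain if rain else 1 - p_rain
    for traffic in (True, False):
        pt = p_traffic_given_rain[rain] if traffic else 1 - p_traffic_given_rain[rain]
        p_late += pr * pt * p_late_given_traffic[traffic]

print(round(p_late, 3))  # 0.29
```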
Advantages
| Advantage | Description |
|---|---|
| Handles uncertainty | Real-world modeling |
| Modular | Easy to update |
| Efficient inference | Probabilistic reasoning |
Comparison Table
| Technique | Based On | Application |
|---|---|---|
| Forward Chaining | Rules & facts | Expert systems |
| Backward Chaining | Goals | Prolog |
| Utility Theory | Preferences | Decision-making |
| HMM | Hidden states | Speech, prediction |
| Bayesian Network | Probability graph | Diagnosis |
MCA Exam-Oriented Tips
- Draw forward & backward chaining diagrams
- Write Bayes’ theorem formula clearly
- Mention HMM components explicitly
- Compare HMM vs Bayesian Network
- Use examples for utility theory
End of Notes