Knowledge Representation and Reasoning



Introduction to Knowledge Representation and Reasoning (KRR)

Knowledge Representation and Reasoning is a core area of Artificial Intelligence that deals with:

  • Representing information about the real world in a machine-readable form
  • Reasoning (drawing conclusions) from the represented knowledge

Objectives of KRR

  • Store knowledge efficiently
  • Perform logical reasoning
  • Support intelligent decision-making

KRR in AI Applications

Area            | Example
Expert Systems  | Medical diagnosis
NLP             | Language understanding
Robotics        | Environment reasoning
Planning        | Automated scheduling

Propositional Logic

Propositional Logic is the simplest form of logic where statements are either true (T) or false (F).

Basic Elements

Element              | Description
Proposition          | A declarative statement
Logical Connectives  | AND (∧), OR (∨), NOT (¬), IMPLIES (→)
Truth Values         | True or False

Example

  • P: "It is raining"
  • Q: "Road is wet"
  • Expression: P → Q ("If it is raining, then the road is wet")

Truth Table for AND

P | Q | P ∧ Q
T | T | T
T | F | F
F | T | F
F | F | F
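
The same table can be generated programmatically; here is a minimal Python sketch using only the standard library:

from itertools import product

print("P     Q     P AND Q")
for p, q in product([True, False], repeat=2):
    # Python's built-in `and` directly models the ∧ connective.
    print(f"{p!s:<6}{q!s:<6}{p and q}")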

Advantages

  • Simple
  • Easy to implement

Limitations

  • Cannot represent relationships between objects
  • No variables or quantifiers

Predicate Logic

Predicate Logic extends propositional logic by introducing:

  • Predicates
  • Variables
  • Quantifiers

Structure

Predicate(subject)
Example: Human(Socrates)

Quantifiers

Quantifier   | Symbol | Meaning
Universal    | ∀      | For all
Existential  | ∃      | There exists

Example

∀x (Human(x) → Mortal(x))   ("Every human is mortal")

Advantages

  • More expressive than propositional logic

First Order Logic (FOL)

First Order Logic is the most widely used logic in AI.

Components of FOL

Component    | Description
Constants    | Specific objects (e.g., John)
Variables    | General objects (e.g., x)
Predicates   | Properties and relations
Functions    | Mappings from objects to objects
Quantifiers  | ∀, ∃

Example

King(John)
∀x (King(x) → Person(x))
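
A minimal Python sketch of applying this universally quantified rule to the fact; the tuple-based fact representation is purely illustrative, not a standard library:

# Facts are (predicate, argument) tuples; names follow the example above.
facts = {("King", "John")}

# Apply "∀x King(x) → Person(x)" to every matching fact.
derived = {("Person", x) for pred, x in facts if pred == "King"}
print(facts | derived)   # {('King', 'John'), ('Person', 'John')}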

Inference in First Order Logic

Inference is the process of deriving new knowledge from existing facts.

Common Inference Techniques

Technique          | Description
Modus Ponens       | If P → Q and P, then Q
Unification        | Finding substitutions that make expressions match
Forward Chaining   | Data-driven reasoning
Backward Chaining  | Goal-driven reasoning

Example (Modus Ponens)

P → Q
P
----
Q
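
A minimal Python sketch of this rule of inference, with rules stored as (premise, conclusion) pairs; the proposition names follow the schematic above:

facts = {"P"}
rules = [("P", "Q")]   # represents P → Q

for premise, conclusion in rules:
    if premise in facts:        # P → Q holds and P is a fact ...
        facts.add(conclusion)   # ... so Q is derived

print(facts)   # {'P', 'Q'} (set order may vary)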

Clause Form Conversion

Clause form (or CNF – Conjunctive Normal Form) is required for resolution.

Steps in Clause Form Conversion

  • Eliminate implications (P → Q becomes ¬P ∨ Q)
  • Move negation inward (De Morgan's laws)
  • Standardize variables apart
  • Skolemize: replace existentially quantified variables with Skolem constants/functions
  • Drop universal quantifiers
  • Distribute ∨ over ∧ to obtain CNF

Example

∀x (Human(x) → Mortal(x))

Eliminating the implication gives ∀x (¬Human(x) ∨ Mortal(x)), and dropping the universal quantifier yields the clause:

¬Human(x) ∨ Mortal(x)

If an existential quantifier were present, e.g. ∀x ∃y Loves(x, y), Skolemization would replace y with a Skolem function of x, giving Loves(x, F(x)).
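
For the propositional skeleton of such a formula, a library like sympy (assumed installed here) can compute the CNF automatically; quantifier handling and Skolemization are outside this small demo:

from sympy import symbols
from sympy.logic.boolalg import Implies, to_cnf

H, M = symbols("H M")          # H stands for Human(x), M for Mortal(x)
print(to_cnf(Implies(H, M)))   # M | ~H, i.e. ¬Human(x) ∨ Mortal(x)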

Resolution

Resolution is a sound and refutation-complete inference method used in FOL: to prove a statement, its negation is added to the clause set and a contradiction (the empty clause) is derived.

Resolution Principle

  • Combine two clauses containing complementary literals (L and ¬L), dropping the complementary pair

Example

Clause 1: (A ∨ B)
Clause 2: (¬A ∨ C)
Resolved Clause: (B ∨ C)

Resolution Process Diagram

(A ∨ B)
(¬A ∨ C)
   |
 (B ∨ C)
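
A minimal Python sketch of a single propositional resolution step on the clauses above; clauses are modelled as sets of literal strings, with "~" marking negation:

def negate(lit):
    # "~A" <-> "A"
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Produce one resolvent per complementary literal pair.
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

print(resolve({"A", "B"}, {"~A", "C"}))   # [{'B', 'C'}] (set order may vary)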

Advantages

  • Refutation-complete inference mechanism
  • Used in automated theorem proving

Comparison Table

Logic Type     | Expressiveness | Complexity
Propositional  | Low            | Low
Predicate      | Medium         | Medium
First Order    | High           | High

Exam-Oriented Notes (MCA)

  • Write syntax and example clearly
  • Draw resolution steps
  • Mention Skolemization steps
  • Compare propositional vs FOL
  • Use tables wherever possible

Chaining, Utility Theory and Probabilistic Reasoning 

Chaining – Concept

Chaining is a reasoning technique used in Artificial Intelligence and Expert Systems to derive conclusions from known facts using a set of rules.

Chaining works on IF–THEN rules:

IF condition THEN action

Purpose of Chaining

  • To infer new facts
  • To reach a goal state logically
  • To support automated decision-making

Types of Chaining

Type               | Approach
Forward Chaining   | Data-driven
Backward Chaining  | Goal-driven

Forward Chaining

Forward chaining starts with known facts and applies rules to infer new facts until the goal is reached.

Working of Forward Chaining

  • Start with initial facts
  • Match facts with rule conditions
  • Fire applicable rules
  • Add new facts to knowledge base
  • Stop when goal is achieved

Example

Rule: IF Fever THEN Infection
Fact: Fever
Conclusion: Infection

Forward Chaining Diagram

Facts → Rules → New Facts → Goal
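
A minimal Python sketch of this loop, extending the Fever example with one extra illustrative rule (NeedsTreatment is a made-up fact name):

facts = {"Fever"}
rules = [({"Fever"}, "Infection"),
         ({"Infection"}, "NeedsTreatment")]
goal = "NeedsTreatment"

changed = True
while changed and goal not in facts:   # stop when the goal is achieved
    changed = False
    for conditions, conclusion in rules:
        # Fire a rule when all of its conditions are known facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # {'Fever', 'Infection', 'NeedsTreatment'}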

Applications

  • Expert systems
  • Medical diagnosis
  • Fault detection

Advantages & Limitations

Advantages                 | Limitations
Automatic reasoning        | May generate unnecessary facts
Suitable for dynamic data  | Slower for specific goals

Backward Chaining

Backward chaining starts with a goal and works backward to determine whether known facts support the goal.

Working of Backward Chaining

  • Identify the goal
  • Find rules that conclude the goal
  • Check if rule conditions are satisfied
  • Prove conditions using facts

Example

Goal: Infection
Rule: IF Fever THEN Infection
Check: Is Fever true?

Backward Chaining Diagram

Goal ← Rules ← Facts
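
A minimal Python sketch of this goal-driven search on the same Fever example; it omits the cycle handling a real system would need:

facts = {"Fever"}
rules = [({"Fever"}, "Infection")]

def prove(goal):
    if goal in facts:
        return True
    # Try every rule that concludes the goal; prove its conditions recursively.
    return any(all(prove(cond) for cond in conditions)
               for conditions, conclusion in rules
               if conclusion == goal)

print(prove("Infection"))   # True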

Applications

  • Prolog
  • Diagnosis systems
  • Query-based systems

Advantages & Limitations

Advantages                    | Limitations
Efficient for specific goals  | Requires predefined goals
Less computation              | Not suitable for all problems

Utility Theory

Utility theory deals with decision-making under uncertainty by assigning numerical values (utilities) to outcomes.

Key Concepts

Term            | Meaning
Utility         | Measure of satisfaction or preference
Rational Agent  | Maximizes expected utility
Preference      | Ranking of outcomes

Utility Function

U(outcome) → Real Number

A rational agent chooses the action with the highest expected utility:

EU(action) = Σ P(outcome | action) × U(outcome)

Example

  • Choosing job offers based on salary, location, growth
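
A minimal Python sketch of expected-utility maximization for such a choice; all probabilities and utility values are made-up illustrative numbers:

actions = {
    "job_A": [(0.7, 80), (0.3, 40)],    # (probability, utility) per outcome
    "job_B": [(0.5, 100), (0.5, 30)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# A rational agent picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # job_A 68.0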

Importance

  • Helps AI agents make optimal decisions
  • Used in economics and game theory

Probabilistic Reasoning

Probabilistic reasoning handles uncertainty using probability theory.

Why Probability?

  • Real-world data is incomplete
  • Information may be noisy
  • Outcomes are uncertain

Basic Probability Rules

Rule                     | Formula
Conditional Probability  | P(A|B) = P(A ∩ B) / P(B)
Bayes’ Theorem           | P(A|B) = P(B|A) × P(A) / P(B)
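
A minimal Python sketch applying Bayes’ theorem to a diagnosis-style question; the numbers are made-up illustrative values:

p_disease = 0.01             # prior P(D)
p_pos_given_disease = 0.95   # likelihood P(+|D)
p_pos_given_healthy = 0.05   # false-positive rate P(+|¬D)

# Total probability of a positive test: P(+) = P(+|D)P(D) + P(+|¬D)P(¬D)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes’ theorem: P(D|+) = P(+|D)P(D) / P(+)
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))   # 0.161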

Hidden Markov Model (HMM)

A Hidden Markov Model is a probabilistic model where:

  • States are hidden
  • Observations are visible

Components of HMM

Component                 | Description
States                    | Hidden states
Observations              | Visible outputs
Transition Probabilities  | State change probability
Emission Probabilities    | Observation likelihood
Initial State             | Starting probability

HMM Structure Diagram

State1 → State2 → State3
  ↓        ↓        ↓
 Obs1     Obs2     Obs3
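
A minimal Python sketch of the forward algorithm, which computes the probability of an observation sequence under an HMM; all states, observations, and probabilities below are illustrative:

states = ["Rainy", "Sunny"]
initial = {"Rainy": 0.6, "Sunny": 0.4}
transition = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
              "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emission = {"Rainy": {"walk": 0.1, "umbrella": 0.9},
            "Sunny": {"walk": 0.8, "umbrella": 0.2}}

def forward(observations):
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: initial[s] * emission[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emission[s][obs] *
                    sum(alpha[t] * transition[t][s] for t in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["umbrella", "umbrella", "walk"]))   # ≈ 0.136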

Applications

  • Speech recognition
  • Weather prediction
  • Bioinformatics

Bayesian Networks

Bayesian Networks are graphical models that represent probabilistic relationships among variables.

Structure

  • Nodes → Random variables
  • Edges → Dependencies
  • The graph is a Directed Acyclic Graph (DAG)

Example

Rain → Traffic → Late

Bayesian Network Diagram

   Rain
     ↓
  Traffic
     ↓
   Late
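
A minimal Python sketch of inference in this three-node chain using the chain rule; all probabilities are made-up illustrative numbers:

from itertools import product

p_rain = 0.3
p_traffic_given_rain = {True: 0.8, False: 0.2}   # P(Traffic | Rain)
p_late_given_traffic = {True: 0.6, False: 0.1}   # P(Late | Traffic)

# Chain rule for this DAG:
# P(Rain, Traffic, Late) = P(Rain) × P(Traffic|Rain) × P(Late|Traffic)
def joint(rain, traffic, late):
    pr = p_rain if rain else 1 - p_rain
    pt = p_traffic_given_rain[rain] if traffic else 1 - p_traffic_given_rain[rain]
    pl = p_late_given_traffic[traffic] if late else 1 - p_late_given_traffic[traffic]
    return pr * pt * pl

# Marginal P(Late): sum the joint over the unobserved variables.
p_late = sum(joint(r, t, True) for r, t in product([True, False], repeat=2))
print(round(p_late, 3))   # 0.29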

Advantages

Advantage            | Description
Handles uncertainty  | Real-world modeling
Modular              | Easy to update
Efficient inference  | Probabilistic reasoning

Comparison Table

Technique          | Based On           | Application
Forward Chaining   | Rules & facts      | Expert systems
Backward Chaining  | Goals              | Prolog
Utility Theory     | Preferences        | Decision-making
HMM                | Hidden states      | Speech, prediction
Bayesian Network   | Probability graph  | Diagnosis

MCA Exam-Oriented Tips

  • Draw forward & backward chaining diagrams
  • Write Bayes’ theorem formula clearly
  • Mention HMM components explicitly
  • Compare HMM vs Bayesian Network
  • Use examples for utility theory


End of Notes