2025-08-03

An agent is anything that can perceive its environment through sensors and act upon that environment through actuators.

- Human Agent: sensors are eyes, ears, and other organs; actuators are hands, legs, mouth, and other body parts.
- Robotic Agent: sensors such as cameras and range finders; actuators such as motors.
- Software Agent: sensors such as keystrokes and file contents; actuators such as writing files and displaying output on a screen.
An agent operates in a continuous loop: it receives percepts from the environment and performs actions that change the environment.
Agent Function: an abstract mathematical mapping from a history of percepts to an action:

f : P* → A

where P* is the set of all percept sequences and A is the set of all possible actions.

Agent Program: the concrete implementation of the agent function; the actual code that runs on the agent's physical architecture.

Agent = Architecture + Program
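To make the function/program distinction concrete, here is a minimal sketch of an agent program that implements an agent function via a lookup table over percept histories. All names (`table_driven_agent_program`, the vacuum-world percepts) are illustrative, not from any particular library:

```python
# Sketch: an agent program implementing an agent function f : P* -> A
# as a lookup table from percept histories to actions.

def table_driven_agent_program(table):
    """Return an agent program: it is called with one percept at a time,
    but internally keys its decision on the whole percept sequence."""
    percepts = []  # the percept sequence P* observed so far

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "no-op")

    return program

# The agent function, tabulated: percept history -> action.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move-right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = table_driven_agent_program(table)
print(agent(("A", "clean")))  # move-right
print(agent(("B", "dirty")))  # suck
```

The table grows exponentially with the percept history, which is exactly why the agent types below replace the table with more compact decision rules.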
A rational agent is one that does the "right thing." But what does "right" mean?
For every possible sequence of percepts, a rational agent should choose an action that is expected to maximize its performance measure, given the evidence from the percepts and any built-in knowledge the agent has.
When an agent decides how to act, its choice is guided by four factors:

- The performance measure that defines the criterion of success.
- The agent's prior knowledge of the environment.
- The actions the agent can perform.
- The agent's percept sequence to date.
Rationality vs. Omniscience
Rationality is not the same as omniscience. An omniscient agent knows the actual outcome of its actions and can see the future. A rational agent simply makes the best decision based on what it knows now.
To design a rational agent, we must first specify its task environment. The PEAS framework is used for this purpose.
- P (Performance Measure): How is success measured? What do we want the agent to achieve?
- E (Environment): Where does the agent operate? What are the "laws of physics" of this world?
- A (Actuators): What can the agent do? How does it affect the environment?
- S (Sensors): How does the agent perceive the environment? What information can it gather?
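A PEAS description can be written down as a simple structured record. The sketch below fills one in for the classic automated-taxi example; the `PEAS` class name and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification (illustrative names)."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Classic worked example: an automated taxi driver.
taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.sensors[0])  # cameras
```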
The design of an agent is heavily influenced by the type of environment it operates in.
Agents can be categorized based on the complexity of their decision-making process.
Level 1: Simple Reflex Agent
Action basis: the current percept only. It ignores the rest of the percept history and operates on simple condition-action rules.
Example: a car that brakes only when the brake lights of the car in front light up.
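A condition-action rule is literally an `if` on the current percept. A minimal sketch of the braking example (the percept keys are illustrative assumptions):

```python
# Sketch: a simple reflex agent. It sees only the current percept
# and applies condition-action rules; no history, no internal state.

def simple_reflex_driver(percept):
    """percept is a dict describing only the current instant."""
    if percept.get("car_in_front_braking"):
        return "brake"
    return "keep-driving"

print(simple_reflex_driver({"car_in_front_braking": True}))   # brake
print(simple_reflex_driver({"car_in_front_braking": False}))  # keep-driving
```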
Level 2: Model-based Reflex Agent
Action basis: internal state, which models how the world works. This agent maintains an internal model of the world to handle partial observability: it tracks the state of things it cannot currently see.
Example: a taxi agent remembering which roads it has already traveled.
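The key addition over Level 1 is state that persists between percepts. A sketch of the road-remembering taxi, with illustrative percept fields:

```python
# Sketch: a model-based reflex agent. The set of visited roads is
# internal state the current percept alone cannot provide.

class ModelBasedTaxi:
    def __init__(self):
        self.visited = set()  # internal model: roads already traveled

    def act(self, percept):
        self.visited.add(percept["current_road"])
        new_roads = [r for r in percept["adjacent_roads"] if r not in self.visited]
        return f"take {new_roads[0]}" if new_roads else "backtrack"

taxi = ModelBasedTaxi()
print(taxi.act({"current_road": "A", "adjacent_roads": ["B", "C"]}))  # take B
print(taxi.act({"current_road": "B", "adjacent_roads": ["A", "D"]}))  # take D
```

On the second call the percept alone cannot tell the agent that road "A" is old news; only the internal state can.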
Level 3: Goal-based Agent
Action basis: model-based + explicit goals. This agent considers future outcomes. It asks, "Which of my possible actions will lead me to a goal state?"
Example: a taxi agent formulating a plan (a sequence of turns) to reach a specific destination.
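Formulating a plan typically means searching the model for an action sequence that reaches the goal. A sketch using breadth-first search over an illustrative road map (the map and names are assumptions, not from the text):

```python
from collections import deque

# Sketch: goal-based behaviour. Instead of reacting, the agent searches
# its world model for a sequence of moves that reaches a goal state.

roads = {  # illustrative one-way road map: intersection -> reachable next stops
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def plan_route(start, goal):
    """Breadth-first search: returns the shortest path of intersections."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan_route("A", "E"))  # ['A', 'B', 'D', 'E']
```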
Level 4: Utility-based Agent
Action basis: model-based + a utility function. When there are multiple paths to a goal, this agent chooses the one that maximizes its "happiness" or utility. Utility is a function that maps a state to a real number representing its desirability.
Example: a taxi agent choosing the route that is not just correct, but is also the quickest and most fuel-efficient.
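Where a goal-based agent only distinguishes goal from non-goal, a utility function ranks the goal-reaching options. A sketch with an illustrative utility (the routes and weights are assumptions):

```python
# Sketch: utility-based choice among routes that all reach the goal.
# Utility maps a state to a real number; the agent picks the maximum.

def utility(route):
    # Illustrative trade-off: each minute costs 1, each liter of fuel costs 2.
    return -route["minutes"] - 2.0 * route["fuel_liters"]

routes = [
    {"name": "highway",  "minutes": 20, "fuel_liters": 3.0},  # utility -26
    {"name": "downtown", "minutes": 35, "fuel_liters": 1.5},  # utility -38
]
best = max(routes, key=utility)
print(best["name"])  # highway
```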
Level 5: Learning Agent
Action basis: all of the above, plus the ability to improve. A learning agent can operate in unknown environments and become more competent than its initial knowledge allows. It has four main components:
- Performance Element: the agent itself (e.g., a utility-based agent).
- Critic: provides feedback on how well the agent is doing.
- Learning Element: uses feedback to modify the performance element.
- Problem Generator: suggests exploratory actions to gain new, informative experiences.
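The four components can be wired into one loop. The toy sketch below (all names and the braking threshold are illustrative assumptions) tunes a single parameter of the performance element from the critic's feedback:

```python
import random

# Sketch: the four learning-agent components in one loop.
class LearningAgent:
    def __init__(self):
        self.threshold = 1.0  # tunable knowledge of the performance element

    def performance_element(self, gap):
        """The acting agent itself: brake when the gap looks too small."""
        return "brake" if gap < self.threshold else "drive"

    def critic(self, gap, action):
        """Feedback: in this toy world, braking is correct iff gap < 0.5."""
        return +1 if (action == "brake") == (gap < 0.5) else -1

    def learning_element(self, gap, action, feedback):
        """Use feedback to modify the performance element's parameter."""
        if feedback < 0:
            self.threshold += -0.1 if action == "brake" else +0.1

    def problem_generator(self):
        """Suggest exploratory situations to gain informative experience."""
        return random.random()

agent = LearningAgent()
for _ in range(200):
    gap = agent.problem_generator()
    action = agent.performance_element(gap)
    agent.learning_element(gap, action, agent.critic(gap, action))
# The threshold drifts toward the true decision boundary of 0.5.
print(agent.threshold)
```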
Copyright Ownership: WARREN Y.F. LONG
Licensed under: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)