Foundations of Agents
Full course description
Agents are autonomous entities such as computer programs, robots, and humans. Agents operate in some environment, which they can observe and in which they can realize objectives through the execution of actions. Examples of environments in which agents can operate are computer game environments, the internet, and, in the case of robots and humans, the physical world.
In this course we address the question of how an agent can act optimally in order to realize its objectives. We answer this question by investigating how to formally specify the agent's environment, the agent's objectives, the observations the agent can make, and the actions it can execute. We then use these formal models to investigate how the agent can determine an (optimal) behaviour realizing its objectives.
The following formal models will be investigated:
- Markov Decision Processes,
- Partially Observable Markov Decision Processes,
- logic-based models such as Epistemic Logic, Doxastic Logic, Dynamic Logic, and BDI logics, and
- Game Theory.
Methods for determining the agent's optimal behaviour addressed in the course include Value Iteration, Policy Iteration, Q-Learning, and Planning.
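To give a flavour of the methods listed above, the following is a minimal sketch of Value Iteration on a small, hypothetical two-state MDP. The transition probabilities and rewards are illustrative assumptions, not taken from the course materials.

```python
# Hypothetical two-state MDP: states 0 and 1, actions 0 ("wait") and 1 ("move").
# P[s][a] is a list of (probability, next_state, reward) triples.
# All numbers below are made up for illustration.
P = {
    0: {0: [(1.0, 0, 0.0)],
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 2.0)],
        1: [(1.0, 0, 0.0)]},
}

def value_iteration(P, gamma=0.9, eps=1e-8):
    """Compute the optimal state values by repeated Bellman updates."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality update:
            # V(s) = max_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:  # stop once values have converged
            return V

V = value_iteration(P)
```

For this toy MDP, staying in state 1 yields reward 2 forever, so its value converges to 2 / (1 - 0.9) = 20; the value of state 0 follows from its best action. An optimal policy is then read off by taking, in each state, the action that attains the maximum in the update.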
Prerequisites: a basic course in logic and in probability theory.