
Intelligent Agents and Decision Theory
type: Lecture (V)
semester: SS 2020
time: 09:45 - 11:15, weekly
dates: 2020-04-23, 2020-04-30, 2020-05-07, 2020-05-14, 2020-05-28, 2020-06-04, 2020-06-18, 2020-06-25, 2020-07-02, 2020-07-09, 2020-07-16, 2020-07-23
place: 30.28 Seminarraum 4 (R004), 30.28 Lernzentrum Wolfgang-Gaede-Str. 6

lecturer: Prof. Dr. Andreas Geyer-Schulz
lv-no.: 2540537 (https://campus.studium.kit.edu/events/sgsCece8RsGO_XNYob4Tqg)
Notes

The key assumption of this lecture is that the concept of artificial intelligence is inseparably linked to the economic concept of agent rationality. We consider different classes of decision problems - decisions under certainty, under risk, and under uncertainty - from an economic, managerial, and AI-engineering perspective:

From an economic point of view, we analyze how to act rationally in these situations based on classic utility theory. In this regard, the course also introduces the relevant parts of decision theory for dealing with

  • multiple conflicting objectives,
  • incomplete, risky and uncertain information about the world,
  • assessing utility functions, and
  • quantifying the value of information.
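As a toy illustration of the utility-theoretic view described above, the following sketch compares a safe action with a risky lottery by their expected utilities. All payoffs and probabilities are made-up illustrative numbers, not material from the course:

```python
# Expected-utility comparison of two actions under risk.
def expected_utility(probs, utils):
    """Expected utility of an action: sum_i p_i * u_i."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in zip(probs, utils))

# Action A: a certain payoff; action B: a 50/50 lottery.
eu_a = expected_utility([1.0], [50.0])
eu_b = expected_utility([0.5, 0.5], [100.0, 10.0])
best = "A" if eu_a >= eu_b else "B"
```

Here the lottery B has the higher expected utility (55 vs. 50); with a concave (risk-averse) utility function over payoffs, the ranking could reverse.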

From an engineering perspective, we discuss how to develop practical solutions for these decision problems, using appropriate AI components. We introduce

  • a general, agent-based design framework for AI systems,

as well as AI methods from the fields of

  • search (for decisions under certainty),
  • inference (for decisions under risk) and
  • learning (for decisions under uncertainty).
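A minimal sketch of the inference-for-decisions idea: update a belief over world states from an observation via Bayes' rule, then choose the action with the highest expected utility under the posterior. All priors, likelihoods, and utilities are illustrative assumptions:

```python
# Bayesian belief update followed by an expected-utility decision.
prior = {"good": 0.5, "bad": 0.5}
likelihood = {"good": 0.8, "bad": 0.2}   # P(positive signal | state)

# Posterior via Bayes' rule: P(s | e) proportional to P(e | s) * P(s).
unnorm = {s: likelihood[s] * prior[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: v / z for s, v in unnorm.items()}

# Utility of each action in each state; pick the argmax of expected utility.
utility = {"invest": {"good": 100.0, "bad": -50.0},
           "hold":   {"good": 0.0,   "bad": 0.0}}
eu = {a: sum(posterior[s] * u for s, u in us.items())
      for a, us in utility.items()}
decision = max(eu, key=eu.get)
```

After the positive signal the posterior on "good" rises to 0.8, making "invest" the expected-utility-maximizing action in this toy setting.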

Where applicable, the course highlights the theoretical ties between these methods and decision theory.

We conclude with a discussion of ethical and philosophical issues concerning the development and use of AI.

Learning objectives

Students are able to design, analyze, implement, and evaluate intelligent agents.

Lecture Outline

  1. Introduction: Artificial intelligence and the economic concept of rationality
  2. Intelligent Agents: A general, agent-based design framework for AI systems
  3. Decision under certainty: Assessing utility functions for decisions with multiple objectives
  4. Search: Linear programming for decisions under certainty
  5. Decisions under risk: The expected utility principle
  6. Information systems: Improving economic decisions under risk
  7. Inference: Bayesian networks for decisions under risk
  8. Information value: When should an agent gather new information?
  9. Decisions under uncertainty: Complete lack of information
  10. Learning: Statistical learning of Bayesian networks
  11. Learning: Supervised learning with neural networks
  12. Learning: Reinforcement learning
  13. Learning: Preference-based reinforcement learning
  14. Discussion: Ethical and philosophical issues

Note: This outline is preliminary and subject to change.
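Outline item 9 concerns decisions under uncertainty, where no probabilities over states are available. A classic decision rule covered in that setting is maximin: pick the action whose worst-case payoff is best. The payoff table below is an illustrative assumption:

```python
# Maximin rule for decisions under uncertainty (no state probabilities known).
# Rows: actions; columns: payoffs in states s1, s2, s3 (illustrative numbers).
payoff = {
    "a1": [20, 30, 10],
    "a2": [50, 5, 40],
    "a3": [25, 25, 25],
}

# For each action, look at its worst case; choose the action with the best worst case.
maximin_choice = max(payoff, key=lambda a: min(payoff[a]))
```

Here the pessimistic rule selects the constant action a3 (worst case 25), even though a2 offers the highest possible payoff; other rules (maximax, Hurwicz, minimax regret) weigh the same table differently.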

Bibliography

Basic literature (by lecture):

  1. Russell & Norvig (2016, chapter 1), Bamberg et al. (2019, chapters 1 & 2)
  2. Russell & Norvig (2016, chapter 2)
  3. Keeney & Raiffa (1993, chapter 3)
  4. Nickel et al. (chapter 1, in German), Russell & Norvig (2016, chapter 3)
  5. Bamberg et al. (2019, chapter 4), Fishburn (1988)
  6. Bamberg et al. (2019, chapter 6)
  7. Russell & Norvig (2016, chapters 13, 14, 16)
  8. Russell & Norvig (2016, chapter 16), Bamberg et al. (2019, chapter 6)
  9. Bamberg et al. (2019, chapter 5)
  10. Russell & Norvig (2016, chapter 20)
  11. Goodfellow et al. (2016, chapter 6)
  12. Sutton & Barto (2018, chapter 3)
  13. Wirth et al. (2017)
  14. Russell & Norvig (2016, chapter 26)

Detailed references:

Bamberg, Coenenberg & Krapp (2019). Betriebswirtschaftliche Entscheidungslehre (16th ed.). Verlag Franz Vahlen GmbH.

Fishburn (1988). Nonlinear preference and utility theory. Baltimore: Johns Hopkins University Press.

Goodfellow, Bengio & Courville (2016). Deep learning. Cambridge: MIT Press.

Keeney & Raiffa (1993). Decisions with multiple objectives: preferences and value trade-offs. Cambridge University Press.

Russell & Norvig (2016). Artificial Intelligence: A Modern Approach (3rd Global Edition). Pearson.

Sutton & Barto (2018). Reinforcement learning: An introduction. Cambridge: MIT Press.

Wirth, Akrour, Neumann & Fürnkranz (2017). A Survey of Preference-Based Reinforcement Learning Methods. Journal of Machine Learning Research, 18(1), 1–46.