Decision-theoretic and game-theoretic approaches to decision making in collectives

By Matthijs Spaan (ISR/IST)

In robotics research, when developing a robot's ability to decide on its course of action, we typically design the robot to be rational. People, on the other hand, are well known to be less than perfectly rational in some cases, and this might be a major difference between human societies and artificial ones. In this talk, we will explore some ways of instilling rational behavior in an artificial system such as a robot.

Rational behavior in robots typically assumes the robot has knowledge about which actions it can perform and which goals it could achieve. The robot not only has to choose which goal to pursue, but also which actions will take it to its chosen goal. To make a rational choice, the robot needs access to quite a lot of information. For instance, it needs to know the value of the different goals, that is, how much reaching a particular goal is worth to the robot, in terms of money, energy, etc. This valuation is called the goal's utility to the robot. Intuitively, we would expect the robot to choose the goal with the highest utility. However, one component is still missing: after choosing a goal, the robot needs to know which actions to take to reach it. In many systems, we cannot predict with full certainty what will happen when the robot executes an action. When deciding which goal to pursue, the robot should therefore also take into account the uncertainty in the path to the goal. Put simply, if the path to a goal with high utility is very dangerous, it might be better to choose a different goal, for instance one with a lower utility but which can be reached safely. In general, we say that the robot tries to maximize its expected utility.
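The trade-off above can be sketched in a few lines of Python. The goals, utilities, and success probabilities here are hypothetical, chosen only to illustrate how a high-utility but risky goal can lose to a safer, lower-utility one:

```python
# Minimal sketch of expected-utility goal selection.
# The goal names, utilities, and success probabilities are made up
# for illustration; they are not from the talk itself.

goals = {
    "distant_charger": {"utility": 100.0, "p_success": 0.3},   # valuable but risky path
    "nearby_charger":  {"utility": 60.0,  "p_success": 0.95},  # less valuable, safe path
}

def expected_utility(goal):
    # Weight the goal's utility by the probability of actually reaching it.
    return goal["utility"] * goal["p_success"]

best = max(goals, key=lambda name: expected_utility(goals[name]))
print(best)  # "nearby_charger": 60 * 0.95 = 57 beats 100 * 0.3 = 30
```

Even though the distant goal is worth more in itself, its expected utility is lower once the danger of the path is factored in, so a rational robot picks the safer goal.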

When extending this methodology to collectives of interacting robots, several interesting issues arise. One of them is whether to model individuals in the collective as competing with each other, as cooperating, or as something in between. We will examine several ways of modeling collectives by looking at game theory, in which the players are typically assumed to be competing with each other, as well as at decentralized decision theory. In the latter methodology, the individuals are fully cooperative and will try to choose actions that benefit the collective as a whole the most. While this might not be a realistic way to model human collectives, for artificial collectives that we can design ourselves it proves a valuable starting point.
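The fully cooperative case can be illustrated with a toy coordination problem. The scenario and payoffs below are invented for illustration: two robots each choose a corridor, they share a single team reward, and the collective simply selects the joint action that maximizes that shared reward:

```python
import itertools

# Hypothetical two-robot coordination problem (not from the talk).
# Both robots share one team reward, so in the fully cooperative
# setting we evaluate joint actions and pick the best one.
actions = ["left", "right"]
team_reward = {
    ("left", "left"):   -1.0,  # both take the same corridor: collision risk
    ("left", "right"):   5.0,  # complementary choices work best
    ("right", "left"):   4.0,
    ("right", "right"): -1.0,
}

best_joint = max(itertools.product(actions, actions), key=team_reward.get)
print(best_joint)  # ('left', 'right')
```

In a competitive game-theoretic model, by contrast, each robot would maximize its own payoff, and the analysis centers on equilibria rather than on a single joint maximization.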

Some relevant publications:

Matthijs T. J. Spaan and Francisco S. Melo. Interaction-Driven Markov Games for Decentralized Multiagent Planning under Uncertainty. In Proc. of Int. Joint Conference on Autonomous Agents and Multiagent Systems, pp. 525–532, 2008. [pdf]

Matthijs T. J. Spaan, Geoffrey J. Gordon, and Nikos Vlassis. Decentralized planning under uncertainty for teams of communicating agents. In Proc. of Int. Joint Conference on Autonomous Agents and Multiagent Systems, pp. 249–256, 2006. [pdf]

Jelle R. Kok, Matthijs T. J. Spaan, and Nikos Vlassis. Non-communicative multi-robot coordination in dynamic environments. Robotics and Autonomous Systems, 50(2-3):99–114, February 2005. [pdf]

Frans A. Oliehoek, Matthijs T. J. Spaan, and Nikos Vlassis. Optimal and Approximate Q-value Functions for Decentralized POMDPs. Journal of Artificial Intelligence Research, 32:289–353, 2008. [pdf]


Dr. Matthijs Spaan delivering his lecture.


The video of this lecture and the slides used on the occasion follow below. The lecture can be reconstructed by combining these two resources: advance the slides manually as the talk progresses.
