
Inverse planning


Inverse planning refers to the process of inferring an agent's mental states, such as its goals, beliefs, and emotions, from its actions, by assuming that the agent is a rational planner.[1][2] It is a method commonly used in computational cognitive science and artificial intelligence for modeling agents' theory of mind.

Inverse planning is closely related to inverse reinforcement learning, which attempts to learn a reward function from an agent's behavior, and to plan recognition, which identifies goals that are logically consistent with the observed actions.

Bayesian inverse planning

A causal diagram of an agent's goal and actions

Inverse planning is often given a Bayesian formulation, with inference performed by methods such as sequential Monte Carlo. The inference process can be represented with a graphical model, shown in the accompanying causal diagram. In this diagram, a rational agent with a goal g produces a plan consisting of a sequence of actions a_{1:t} = (a_1, ..., a_t), where each a_i is the action taken at time step i.

In the forward planning model, the agent is typically assumed to be rational. The agent's actions can then be derived from a Boltzmann-rational action distribution,

    P(a_i \mid g) \propto \exp(-C(a_i, g) / T),

where C(a_i, g) is the cost of the optimal plan that achieves the goal g after first performing action a_i, and T is the Boltzmann temperature parameter.
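
As an illustration, the following is a minimal Python sketch of this action distribution; the function name, the number of candidate actions, and the cost values are hypothetical choices made for illustration, not part of any standard library.

    import numpy as np

    def boltzmann_action_dist(costs, temperature=1.0):
        # Boltzmann-rational action distribution over candidate actions:
        # actions whose optimal continuation to the goal is cheaper
        # receive exponentially more probability mass.
        logits = -np.asarray(costs, dtype=float) / temperature
        logits -= logits.max()  # subtract the max for numerical stability
        probs = np.exp(logits)
        return probs / probs.sum()

    # Hypothetical costs C(a_i, g) of the optimal plan to a single goal g,
    # starting with each of three candidate actions (values invented).
    print(boltzmann_action_dist([3.0, 4.0, 6.0]))  # ~[0.71, 0.26, 0.04]

Lower temperatures concentrate the distribution on the cheapest action, approaching a perfectly rational planner in the limit, while higher temperatures model noisier action choice.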

Then, given observed actions a_{1:t}, inverse planning applies Bayes' rule to invert the conditional probability and obtain the posterior probability of the agent's goal,

    P(g \mid a_{1:t}) \propto P(a_{1:t} \mid g) P(g).
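
Continuing the sketch above, a posterior over goals can be computed by accumulating the Boltzmann likelihood of each observed action under each candidate goal; the two goals, their costs, and the uniform prior below are all invented for illustration.

    def goal_posterior(observed_actions, costs_by_goal, prior, temperature=1.0):
        # P(g | a_1:t) is proportional to P(g) times the product over
        # time of the Boltzmann likelihood P(a_i | g) defined above.
        posterior = np.array(prior, dtype=float)
        for t, a in enumerate(observed_actions):
            for i, costs in enumerate(costs_by_goal):
                posterior[i] *= boltzmann_action_dist(costs[t], temperature)[a]
        return posterior / posterior.sum()

    # costs_by_goal[g][t][a]: cost of the optimal plan to goal g after
    # taking action a at step t (hypothetical values for two goals,
    # two time steps, and two actions).
    costs_g1 = [[1.0, 3.0], [1.0, 3.0]]
    costs_g2 = [[3.0, 1.0], [3.0, 1.0]]
    print(goal_posterior([0, 0], [costs_g1, costs_g2], prior=[0.5, 0.5]))
    # ~[0.98, 0.02]: repeatedly choosing the action that is cheap under
    # goal 1 shifts the posterior toward goal 1

In practice the space of goals and plans is usually too large to enumerate exhaustively, which is why approximate inference methods such as sequential Monte Carlo are used.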

Inverse planning can also be applied to infer an agent's beliefs, emotions, preferences, and other mental states. Recent work in Bayesian inverse planning has also been able to account for boundedly rational agent behavior, multimodal interactions, and team actions in multi-agent systems.[3][4][5]

Applications

Inverse planning has been widely used in cognitive science to model agents' behavior and to understand humans' ability to interpret and infer other agents' latent mental states.[1][2][6] It has increasingly been applied in human-AI and human-robot interaction, allowing artificial agents to recognize the goals and beliefs of human users in order to provide assistance.[7][8][9]

References

  1. ^ a b Baker, Chris L.; Saxe, Rebecca; Tenenbaum, Joshua B. (December 2009). "Action understanding as inverse planning". Cognition. 113 (3): 329–349. doi:10.1016/j.cognition.2009.07.005. ISSN 0010-0277. PMID 19729154.
  2. ^ a b Baker, Chris L.; Tenenbaum, J. B.; Saxe, Rebecca R. (2007). "Goal Inference as Inverse Planning". Proceedings of the Annual Meeting of the Cognitive Science Society. 29 (29).
  3. ^ Ying, Lance; Zhi-Xuan, Tan; Mansinghka, Vikash; Tenenbaum, Joshua B. (2023). "Inferring the Goals of Communicating Agents from Actions and Instructions". Proceedings of the AAAI Symposium Series. 2 (1): 26–33. arXiv:2306.16207. doi:10.1609/aaaiss.v2i1.27645. ISSN 2994-4317.
  4. ^ Zhi-Xuan, Tan; Mann, Jordyn L.; Silver, Tom; Tenenbaum, Joshua B.; Mansinghka, Vikash K. (2020-12-06). "Online Bayesian goal inference for boundedly-rational planning agents". Proceedings of the 34th International Conference on Neural Information Processing Systems. NIPS'20. Red Hook, NY, USA: Curran Associates Inc.: 19238–19250. ISBN 978-1-7138-2954-6. S2CID 219687443.
  5. ^ Shum, Michael; Kleiman-Weiner, Max; Littman, Michael L.; Tenenbaum, Joshua B. (2019-07-17). "Theory of Minds: Understanding Behavior in Groups through Inverse Planning". Proceedings of the AAAI Conference on Artificial Intelligence. 33 (1): 6163–6170. arXiv:1901.06085. doi:10.1609/aaai.v33i01.33016163. ISSN 2374-3468.
  6. ^ Baker, Chris L.; Jara-Ettinger, Julian; Saxe, Rebecca; Tenenbaum, Joshua B. (2017-03-13). "Rational quantitative attribution of beliefs, desires and percepts in human mentalizing". Nature Human Behaviour. 1 (4): 1–10. doi:10.1038/s41562-017-0064. ISSN 2397-3374.
  7. ^ Puig, Xavier; Shu, Tianmin; Tenenbaum, Joshua B.; Torralba, Antonio (2023-05-29). "NOPA: Neurally-guided Online Probabilistic Assistance for Building Socially Intelligent Home Assistants". 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE. pp. 7628–7634. arXiv:2301.05223. doi:10.1109/ICRA48891.2023.10161352. ISBN 979-8-3503-2365-8.
  8. ^ Zhi-Xuan, Tan; Ying, Lance; Mansinghka, Vikash; Tenenbaum, Joshua B. (2024-02-27). "Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning". arXiv:2402.17930.
  9. ^ Wu, Sarah A.; Wang, Rose E.; Evans, James A.; Tenenbaum, Joshua B.; Parkes, David C.; Kleiman-Weiner, Max (2021-04-07). "Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration". Topics in Cognitive Science. 13 (2): 414–432. arXiv:2003.11778. doi:10.1111/tops.12525. ISSN 1756-8757. PMID 33829670.