Reinforcement learning under uncertainty: expected versus unexpected uncertainty and state versus reward uncertainty

Ez-zizi, Adnane, Farrell, Simon, Leslie, David, Malhotra, Gaurav and Ludwig, Casimir (2023) Reinforcement learning under uncertainty: expected versus unexpected uncertainty and state versus reward uncertainty. Computational Brain & Behavior, 6, pp. 626-650. ISSN 2522-0861

RL_under_uncertainty_Manuscript (Ez-zizi et al).pdf - Accepted Version (5MB, restricted to repository staff only; request a copy)
Reinforcement Learning Under Uncertainty---.pdf - Published Version (2MB, available under a Creative Commons Attribution licence)

Abstract

Two prominent types of uncertainty that have been studied extensively are expected and unexpected uncertainty. Studies suggest that humans can learn from reward under both expected and unexpected uncertainty when the source of variability is the reward itself. How do people learn when the source of uncertainty is the environment's state and rewards are deterministic? How does their learning compare with the case of reward uncertainty? The present study addressed these questions using behavioural experimentation and computational modelling. Experiment 1 showed that human subjects were generally able to use reward feedback to learn the task rules under state uncertainty, and were able to detect a non-signalled reversal of stimulus-response contingencies. Experiment 2, which crossed all four types of uncertainty (expected versus unexpected, and state versus reward), highlighted key similarities and differences in learning between state and reward uncertainty. We found that subjects performed significantly better in the state uncertainty condition, primarily because they explored less and improved their state disambiguation. We also found that a simple reinforcement learning mechanism that ignores state uncertainty and updates the state-action value of only the identified state accounted for the behavioural data better than both a Bayesian reinforcement learning model that keeps track of belief states and a model that acts by sampling from past experiences. Our findings suggest that a common mechanism supports reward-based learning under both state and reward uncertainty.
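To make the modelling contrast concrete, the sketch below (Python; not the authors' code, and all names such as q, belief and alpha are illustrative assumptions) shows the difference between the simple mechanism favoured by the data, which updates only the state-action value of the single identified state, and the Bayesian alternative, which spreads the update across states in proportion to a posterior belief:

```python
import numpy as np

N_STATES, N_ACTIONS = 2, 2
ALPHA = 0.1  # learning rate (assumed value for illustration)

def single_state_update(q, identified_state, action, reward, alpha=ALPHA):
    """Simple mechanism: update only the Q-value of the state the learner
    identifies, ignoring any residual uncertainty about the true state."""
    q[identified_state, action] += alpha * (reward - q[identified_state, action])
    return q

def belief_state_update(q, belief, action, reward, alpha=ALPHA):
    """Bayesian alternative: update every state's Q-value, weighted by the
    posterior belief that that state was the one actually presented."""
    for s in range(len(belief)):
        q[s, action] += alpha * belief[s] * (reward - q[s, action])
    return q

# Example: an ambiguous stimulus yielding a 70/30 belief over two states.
belief = np.array([0.7, 0.3])
q_simple = single_state_update(np.zeros((N_STATES, N_ACTIONS)),
                               identified_state=int(np.argmax(belief)),
                               action=0, reward=1.0)
q_bayes = belief_state_update(np.zeros((N_STATES, N_ACTIONS)),
                              belief, action=0, reward=1.0)
```

Under this reading, the simple model commits fully to its best guess about the state, while the belief-state model hedges; the abstract reports that the committing model fit the behavioural data better.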

Item Type: Article
Uncontrolled Keywords: expected and unexpected uncertainty, reinforcement learning, Bayesian reinforcement learning, sampling-based learning
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Health & Science > Department of Science & Technology
Depositing User: Adnane Ez-zizi
Date Deposited: 10 Jan 2023 14:43
Last Modified: 17 Sep 2024 13:34
URI: https://oars.uos.ac.uk/id/eprint/2886
