Dopamine, reward learning, and active inference

FitzGerald, Thomas H B, Dolan, Raymond J and Friston, Karl (2015) Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9. ISSN 1662-5188

PDF (fncom-09-00136) - Published Version, available under a Creative Commons Attribution license.

Abstract

Temporal difference learning models propose that phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies in which optogenetic stimulation of dopamine neurons can substitute for actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on the hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
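The abstract's central claim, that dopamine encodes the precision of beliefs about actions rather than a learning signal, can be illustrated with a minimal sketch. In active inference schemes of this kind, precision plays the role of an inverse temperature on a softmax over action values: lowering it (simulated dopamine depletion) flattens the policy and degrades performance without touching the learned values themselves. The function names and numbers below are illustrative, not taken from the paper's model.

```python
import math

def policy(values, gamma):
    """Precision-weighted softmax over action values.

    gamma acts as an inverse temperature: high gamma yields a sharp,
    outcome-sensitive policy; low gamma yields near-random action choice.
    """
    exps = [math.exp(gamma * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical learned values for two actions; learning updates these,
# while gamma only modulates how reliably the better action is chosen.
values = [1.0, 0.5]

intact = policy(values, gamma=4.0)     # sharp policy: mostly picks action 0
depleted = policy(values, gamma=0.5)   # flat policy: behavior near chance
```

Here the same `values` (the product of learning) survive the change in `gamma`, which is the intuition behind depletion impairing performance while sparing learning.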

Item Type: Article
Uncontrolled Keywords: reward, reward learning, variational inference, dopamine, active inference, instrumental conditioning, incentive salience, learning
Faculty \ School: Faculty of Social Sciences > School of Psychology
Depositing User: Pure Connector
Date Deposited: 15 Apr 2016 15:00
Last Modified: 20 Nov 2020 00:39
URI: https://ueaeprints.uea.ac.uk/id/eprint/58286
DOI: 10.3389/fncom.2015.00136
