Word-Object Learning via Visual Exploration in Space (WOLVES): A neural process account of cross-situational word learning

Bhat, Ajaz ORCID: https://orcid.org/0000-0002-6992-8224, Spencer, John P. ORCID: https://orcid.org/0000-0002-7320-144X and Samuelson, Larissa K. ORCID: https://orcid.org/0000-0002-9141-3286 (2022) Word-Object Learning via Visual Exploration in Space (WOLVES): A neural process account of cross-situational word learning. Psychological Review, 129 (4). 640–695. ISSN 0033-295X

PDF (WOLVES_final_Draft) - Accepted Version (6MB)

Abstract

Infants, children, and adults have been shown to track co-occurrence across ambiguous naming situations to infer the referents of new words. The extensive literature on this cross-situational word learning (CSWL) ability has produced support for two theoretical accounts—associative learning (AL) and hypothesis testing (HT)—but no comprehensive model of the behavior. We propose Word-Object Learning via Visual Exploration in Space (WOLVES), an implementation-level account of CSWL grounded in real-time psychological processes of memory and attention that explicitly models the dynamics of looking at a moment-to-moment scale and learning across trials. We use WOLVES to capture data from 12 studies of CSWL with adults and children, thereby providing a comprehensive account of data purported to support both AL and HT accounts. Direct model comparison shows that WOLVES performs well relative to two competitor models. In particular, WOLVES captures more data than the competitor models (132 vs. 69 data values) and fits the data better than the competitor models (e.g., lower percent error scores for 12 of 17 conditions). Moreover, WOLVES generalizes more accurately to three “held-out” experiments, although a model by Kachergis et al. (2012) fares better on another metric of generalization (Akaike Information Criterion [AIC]/Bayesian Information Criterion [BIC]). Critically, we offer the first developmental account of CSWL, providing insights into how memory processes change from infancy through adulthood. WOLVES shows that visual exploration and selective attention in CSWL are both dependent on and indicative of learning within a task-specific context. Furthermore, learning is driven by real-time synchrony of words and gaze and constrained by memory processes over multiple timescales.
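Note on the comparison metrics mentioned above: the abstract reports percent-error scores and AIC/BIC values for comparing WOLVES against competitor models. As a rough, hedged illustration only (not the authors' actual fitting code), the sketch below shows one common way such metrics are computed from a model's predictions, using an SSE-based Gaussian-error form of AIC/BIC; the function names, the SSE-based formulas, and all example numbers are assumptions for illustration, not values from the paper.

import numpy as np

def percent_error(model_vals, data_vals):
    # Mean absolute deviation between model and data, as a percentage of the data.
    model_vals = np.asarray(model_vals, dtype=float)
    data_vals = np.asarray(data_vals, dtype=float)
    return 100.0 * np.mean(np.abs(model_vals - data_vals) / np.abs(data_vals))

def aic_bic_from_sse(sse, n_data, n_params):
    # AIC/BIC under a Gaussian-error assumption, derived from the sum of squared errors.
    log_lik = -0.5 * n_data * (np.log(2 * np.pi * sse / n_data) + 1)
    aic = 2 * n_params - 2 * log_lik
    bic = n_params * np.log(n_data) - 2 * log_lik
    return aic, bic

# Hypothetical usage: two candidate models fit to the same 20 behavioral data values
# (illustrative numbers only, not results from the paper).
rng = np.random.default_rng(0)
data = rng.uniform(0.4, 0.9, size=20)           # observed accuracies
fit_a = data + rng.normal(0, 0.03, size=20)     # predictions from a 4-parameter model
fit_b = data + rng.normal(0, 0.06, size=20)     # predictions from a 2-parameter model

for name, fit, k in [("Model A", fit_a, 4), ("Model B", fit_b, 2)]:
    sse = float(np.sum((fit - data) ** 2))
    aic, bic = aic_bic_from_sse(sse, len(data), k)
    print(name, f"percent error={percent_error(fit, data):.2f}", f"AIC={aic:.1f}", f"BIC={bic:.1f}")

Under this kind of comparison, a model can have a lower percent error while still being penalized on AIC/BIC if it uses more free parameters, which is consistent with the mixed pattern of results described in the abstract.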

Item Type: Article
Additional Information: Acknowledgements: This research was supported by HD045713 awarded to Larissa K. Samuelson. The content is solely the responsibility of the authors and does not necessarily represent the official view of the NIH. The authors wish to thank Teodora Gliga for helpful comments on an earlier version of this manuscript and Will Penny for checking the AIC/BIC formulas. We greatly appreciate timely help from George Kachergis, John Trueswell and Charles Yang with details of the implementation of their models. Simulations presented in this paper were carried out on the High Performance Computing Cluster supported by the Research and Specialist Computing Support service at the University of East Anglia.
Uncontrolled Keywords: cross-situational learning, word learning, neural process model, dynamic field theory (DFT), attention and memory
Faculty \ School: Faculty of Social Sciences > School of Psychology
UEA Research Groups: Faculty of Social Sciences > Research Groups > Developmental Science
Faculty of Social Sciences > Research Groups > Cognition, Action and Perception
Depositing User: LivePure Connector
Date Deposited: 24 May 2022 13:48
Last Modified: 21 Nov 2024 03:23
URI: https://ueaeprints.uea.ac.uk/id/eprint/85058
DOI: 10.31219/osf.io/kxycs
