Bhat, Ajaz A. ORCID: https://orcid.org/0000-0002-6992-8224 and Mohan, Vishwanathan (2018) Goal-directed reasoning and cooperation in robots in shared workspaces: An internal simulation based neural framework. Cognitive Computation, 10 (4). 558–576. ISSN 1866-9956
PDF (Published manuscript), Published Version. Available under a Creative Commons Attribution licence (4 MB).
Abstract
From social dining in households to product assembly on manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as by the interacting partner, based on the evolving environmental context, and to exploit this information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots jointly assemble objects in a shared workspace, we describe a bioinspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of each robot's body and its peripersonal space. These internal models are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequences of the potential actions of two industrial robots in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots assemble industrial fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In this spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at which time instances so as to complete as many assemblies as possible. The shared spatial setting further requires the robots to plan collision-free trajectories and avoid colliding with each other.
Furthermore, a scenario in which the assembly goal is not realizable by either robot individually, but only through meaningful cooperation, is used to demonstrate the interplay between perception, simulation of multiple internal models, and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typically engineered solution to evaluate its performance on the assembly task. The framework provides a computational outlook on emerging results from the neurosciences related to the learning and use of the body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture specifically in a complex planning task, the internal-model-based framework is domain-agnostic, facilitating portability to several other tasks and platforms.
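The reward-based selection the abstract describes can be illustrated with a minimal sketch: at each step a robot evaluates candidate objects with its internal models (reduced here to a simple reachability test standing in for the feasibility anticipation) and acts on the feasible object with the highest anticipated reward. All names, coordinates, and reward values below are hypothetical, for illustration only; they are not taken from the paper.

```python
# Hedged sketch of greedy reward-based object selection with a
# feasibility check; a stand-in for the internal-model simulations
# described in the abstract, not the authors' implementation.
from dataclasses import dataclass
import math


@dataclass
class Obj:
    name: str
    x: float
    y: float
    reward: float  # anticipated contribution to the joint assembly goal


def reachable(robot_base, obj, reach=1.0):
    """Feasibility anticipation, reduced to a planar reachability test."""
    return math.hypot(obj.x - robot_base[0], obj.y - robot_base[1]) <= reach


def choose_action(robot_base, objects):
    """Pick the feasible object with the highest anticipated reward."""
    feasible = [o for o in objects if reachable(robot_base, o)]
    if not feasible:
        return None  # nothing doable: wait, or rely on the partner robot
    return max(feasible, key=lambda o: o.reward)


objects = [
    Obj("fuse", 0.4, 0.2, 1.0),
    Obj("fuse_stand", 0.8, 0.5, 2.0),
    Obj("far_fuse", 3.0, 3.0, 5.0),  # high reward, but out of reach
]
print(choose_action((0.0, 0.0), objects).name)  # selects "fuse_stand"
```

The cooperative scenario in the paper corresponds to the case where `choose_action` returns `None` for one robot while the other can still act, so that only complementary actions by both robots realize the goal.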
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | spatial reasoning, planning, cooperation, internal models, body schema, peripersonal space, industrial assembly |
| Faculty \ School: | Faculty of Social Sciences > School of Psychology |
| UEA Research Groups: | Faculty of Social Sciences > Research Groups > Developmental Science |
| Depositing User: | Pure Connector |
| Date Deposited: | 25 Apr 2018 11:31 |
| Last Modified: | 25 Oct 2024 23:47 |
| URI: | https://ueaeprints.uea.ac.uk/id/eprint/66828 |
| DOI: | 10.1007/s12559-018-9553-1 |