MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Niewiadomski, Radoslaw, Mancini, Maurizio, Baur, Tobias, Varni, Giovanna, Griffin, Harry and Aung, Min S. H. (2013) MMLI: Multimodal Multiperson Corpus of Laughter in Interaction. In: Human Behavior Understanding. Human Behavior Understanding . Springer, pp. 184-195. ISBN 978-3-319-02713-5

The aim of the Multimodal Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data on laughter, with a focus on full-body movements and different laughter types. The corpus contains both induced and interactive laughs from human triads. In total, we collected 500 laughter episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, and physiological data. In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and synchronization between independent data sources. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.

Item Type: Book Section
Faculty \ School: Faculty of Science > School of Computing Sciences
Depositing User: LivePure Connector
Date Deposited: 26 Sep 2019 08:30
Last Modified: 02 Nov 2022 12:30
DOI: 10.1007/978-3-319-02714-2_16
