Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

Elliott, R., Glauert, J. R. W., Kennaway, J. R., Marshall, I. and Safar, E. (2008) Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Universal Access in the Information Society, 6 (4). pp. 375-391. ISSN 1615-5289

Full text not available from this repository.

Abstract

Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research at the University of East Anglia into sign language generation from English text, involving sign language grammar development to support synthesis and the visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of the research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public-sector Web sites are also illustrated.

Item Type: Article
Faculty \ School: Faculty of Science > School of Computing Sciences
Depositing User: Rhiannon Harvey
Date Deposited: 22 Feb 2012 11:31
Last Modified: 21 Apr 2020 17:55
URI: https://ueaeprints.uea.ac.uk/id/eprint/37358
DOI: 10.1007/s10209-007-0102-z
