Use this identifier to reference this record: http://hdl.handle.net/10362/30382
Title: Transposing Formal Annotations of 2D Video Data into a 3D Environment for Gesture Research
Author: Ribeiro, Cláudia
Evola, Vito
Skubisz, Joanna
Anjos, Rafael Kuffner dos
Keywords: gesture research software
visualization techniques
3D annotation
data visualization tool
Unity 3D
Date: 2016
Abstract: Annotating human body movements in video recordings is at the core of contemporary gesture research, allowing scientists to process video data following customized annotation schemes according to the research questions at hand. With more and more gesture researchers focusing on formal aspects of human movements, the starting point of quali-quantitative analyses is the transcription of the movements using specialized software. Notwithstanding advances in data visualization, visualizing processed data (annotations) in Gesture Studies is currently limited to tables and graphs, which present the data in quantitative and temporal terms for further analysis. Alternative ways of visualizing the data could promote alternative ways of reasoning about the research questions (Tversky 2011). This paper aims to highlight a current gap in gesture research tools and to present an option for how gesture scholars can visualize their processed data in a more "user-friendly" way. Recent efforts to incorporate the advantages of 3D, coupled with new visualization techniques, afford new methods to both annotate and analyze body movements, using learning algorithms (e.g. Deep 2015) to model virtual characters' behaviors based on video corpora annotated in software such as ELAN (Brugman & Russel 2004) and ANVIL (Kipp 2012). These advances are nonetheless underdeveloped in Gesture Studies research and could provide interesting insights both into human and virtual character interaction and into semi-automatic ways of annotating and validating video data (Velloso, Bulling, & Gellersen 2013). We present an example of usage, based on data from [AUTHORS] (2015), in which a multiparty scene is transposed from the 2D video data to a modeled 3D environment. Avatars represent the participants, and their body parts are labeled according to the formal annotation scheme used (left hand, right arm, torso, etc.).
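ELAN stores its annotations in the XML-based EAF format, so driving a 3D scene from them begins with extracting time-aligned intervals per articulator tier. The sketch below is a minimal Python illustration of that step under stated assumptions, not the authors' tool: the function name, the tier name "RightHand", and the sample document are hypothetical, though the EAF elements used (TIME_SLOT, TIER, ALIGNABLE_ANNOTATION, ANNOTATION_VALUE) are part of the real format.

```python
import xml.etree.ElementTree as ET

def extract_intervals(eaf_xml, tier_id):
    """Return (start_ms, end_ms, label) tuples for one articulator tier
    of an ELAN .eaf document (EAF is plain XML)."""
    root = ET.fromstring(eaf_xml)
    # TIME_ORDER maps symbolic time-slot ids to millisecond values.
    times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.iter("TIME_SLOT")}
    intervals = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = times[ann.get("TIME_SLOT_REF1")]
            end = times[ann.get("TIME_SLOT_REF2")]
            label = ann.findtext("ANNOTATION_VALUE", default="")
            intervals.append((start, end, label))
    return intervals

# Hypothetical single-tier EAF fragment for illustration.
SAMPLE_EAF = """
<ANNOTATION_DOCUMENT>
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="1000"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="2500"/>
  </TIME_ORDER>
  <TIER TIER_ID="RightHand">
    <ANNOTATION>
      <ALIGNABLE_ANNOTATION ANNOTATION_ID="a1"
          TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
        <ANNOTATION_VALUE>stroke</ANNOTATION_VALUE>
      </ALIGNABLE_ANNOTATION>
    </ANNOTATION>
  </TIER>
</ANNOTATION_DOCUMENT>
"""
```

A 3D engine such as Unity could consume these tuples to toggle the highlighting of the corresponding avatar body part during each interval.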
The movements of each participant's articulators, as annotated in ELAN, are programmed so that their activation is evidenced in the 2D/3D representation of the participants' annotations. This recreates the scene of interest, allowing a more schematic visualization than the original video recording by isolating and foregrounding only the focal elements and eliminating visual "noise". Moreover, gaze annotations are visualized: unlike in the video, where gaze can be tracked for only one participant at a time, this tool renders multiparty gaze annotations synoptically as vectors, allowing the researcher to track the group's gaze-points simultaneously. As a computational model of the annotations, statistical reports will also be available and may contribute to reducing incoherencies between human raters, and thus to higher inter-rater agreement and data reliability. A work in progress, this proof-of-concept prototype is intended to be made available to researchers interested in visualizing formal gesture annotations with minimal setup for their own quali-quantitative research on formal aspects of body movements.
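The abstract mentions statistical reports that could raise inter-rater agreement. One standard measure for two raters assigning categorical labels (e.g. per-frame articulator states) is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only, not part of the prototype; the label sequences are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' equal-length label sequences."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items the raters label identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)
    if expected == 1:          # degenerate case: one shared label only
        return 1.0
    return (observed - expected) / (1 - expected)
```

For instance, two raters agreeing on 3 of 4 frames drawn from the labels "rest" and "stroke" yield a kappa well below the raw 0.75 agreement, because much of that agreement is expected by chance.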
Description: UID/LIN/03213/2013
Peer review: yes
URI: https://www.academia.edu/34062183/Transposing_Formal_Annotations_of_2D_Video_Data_into_a_3D_Environment_for_Gesture_Research
Appears in collections: FCSH: CLUNL - International conference papers

Files in this record:
File | Size | Format
ISGS_BoA_Ribeiro_Evola_Skub_Anjos_44_44.pdf | 72.9 kB | Adobe PDF



All records in the repository are protected by copyright law, with all rights reserved.