Publication

Multimodal emotion classification using machine learning in immersive and non-immersive virtual reality

dc.contributor.authorLima, Rodrigo
dc.contributor.authorChirico, Alice
dc.contributor.authorVarandas, Rui
dc.contributor.authorGamboa, Hugo
dc.contributor.authorGaggioli, Andrea
dc.contributor.authori Badia, Sergi Bermúdez
dc.contributor.institutionLIBPhys-UNL
dc.contributor.pblSpringer Science and Business Media Deutschland GmbH
dc.date.accessioned2024-09-26T22:28:11Z
dc.date.available2024-09-26T22:28:11Z
dc.date.issued2024-06
dc.descriptionFunding Information: Open access funding provided by FCT|FCCN (b-on). This work was funded by the FCT — Fundação para a Ciência e Tecnologia, through the Ph.D. Grants 2020.06024.BD and PD/BDE/150304/2019, supported by the NOVA Laboratory of Computer Science and Informatics (UIDB/04516/2020) and BRaNT project (PTDC/CCI-COM/30990/2017), by the ARDITI — Agência Regional para o Desenvolvimento da Investigação, Tecnologia e Inovação, through the project MACBIOIDI2 (MAC2/1.1b/352), and by PLUX Wireless Biosignals, S.A. Publisher Copyright: © The Author(s) 2024.
dc.description.abstractAffective computing has been widely used to detect and recognize emotional states. The main goal of this study was to automatically detect emotional states using machine learning algorithms. The experimental procedure involved eliciting emotional states using film clips in an immersive and a non-immersive virtual reality setup. The participants’ physiological signals were recorded and analyzed to train machine learning models to recognize users’ emotional states. Furthermore, two subjective emotional rating scales were provided to rate each emotional film clip. Results showed no significant differences between presenting the stimuli in the two degrees of immersion. Regarding emotion classification, it emerged that, for both physiological signals and subjective ratings, user-dependent models perform better than user-independent models. We obtained an average accuracy of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. With user-independent models, on the other hand, the accuracy we obtained was 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpreted these data as the result of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future work, we intend to develop new classification algorithms and transfer them to a real-time implementation. This will make it possible to adapt the virtual reality environment in real time according to the user’s emotional state.en
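The abstract's central contrast is between user-dependent models (trained and tested within each participant) and user-independent models (tested on a participant left out of training). The following is a minimal illustrative sketch of that evaluation split, not the authors' actual pipeline: the data are synthetic, the classifier is an arbitrary logistic regression, and the per-subject "physiological" feature directions are an assumption made to mimic high inter-subject variability.

```python
# Hypothetical sketch (not the paper's code): user-dependent vs.
# user-independent evaluation on synthetic features with strong
# inter-subject variability, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_trials, n_features = 5, 40, 8

X, y, groups = [], [], []
for s in range(n_subjects):
    offset = rng.normal(0, 3, n_features)        # per-subject baseline shift
    direction = rng.normal(0, 2, n_features)     # per-subject class signature
    labels = np.repeat([0, 1], n_trials // 2)    # two balanced emotion classes
    feats = offset + labels[:, None] * direction \
        + rng.normal(0, 1, (n_trials, n_features))
    X.append(feats); y.append(labels); groups.append(np.full(n_trials, s))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

# User-dependent: cross-validate within each subject separately.
dep_scores = []
for s in range(n_subjects):
    m = groups == s
    cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
    for tr, te in cv.split(X[m], y[m]):
        clf = LogisticRegression().fit(X[m][tr], y[m][tr])
        dep_scores.append(accuracy_score(y[m][te], clf.predict(X[m][te])))

# User-independent: leave one whole subject out of training.
indep_scores = []
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression().fit(X[tr], y[tr])
    indep_scores.append(accuracy_score(y[te], clf.predict(X[te])))

print(f"user-dependent accuracy:   {np.mean(dep_scores):.2f}")
print(f"user-independent accuracy: {np.mean(indep_scores):.2f}")
```

Because each synthetic subject carries its own class signature, the within-subject models separate the classes easily while the leave-one-subject-out models hover near chance, qualitatively reproducing the gap the study reports.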
dc.description.versionpublishersversion
dc.description.versionpublished
dc.format.extent23
dc.format.extent2770326
dc.identifier.doi10.1007/s10055-024-00989-y
dc.identifier.issn1359-4338
dc.identifier.otherPURE: 99834304
dc.identifier.otherPURE UUID: 062aedb6-dbc9-4678-9588-6f69b79a187b
dc.identifier.otherScopus: 85192177246
dc.identifier.otherWOS: 001214779600001
dc.identifier.otherORCID: /0000-0002-4022-7424/work/168380621
dc.identifier.urihttp://hdl.handle.net/10362/172496
dc.identifier.urlhttps://www.scopus.com/pages/publications/85192177246
dc.language.isoeng
dc.peerreviewedyes
dc.subjectAffective computing
dc.subjectEmotions
dc.subjectMachine learning
dc.subjectPhysiological signals
dc.subjectVirtual reality
dc.subjectWearables
dc.subjectSoftware
dc.subjectHuman-Computer Interaction
dc.subjectComputer Graphics and Computer-Aided Design
dc.titleMultimodal emotion classification using machine learning in immersive and non-immersive virtual realityen
dc.typejournal article
degois.publication.issue2
degois.publication.titleVirtual Reality
degois.publication.volume28
dspace.entity.typePublication
rcaap.rightsopenAccess

Files

Name:
Multimodal_emotion_classifcation_using_machine_learning.pdf
Size:
2.64 MB
Format:
Adobe Portable Document Format