Dr Alexandra Covaci
I am a researcher in the field of virtual reality, currently a Lecturer in Digital Arts and Technology at the University of Kent. My research interests centre on the transformative power of virtual reality, from training to social scenarios.
My research activities lie at the confluence of virtual reality, multisensory media, human-computer interaction and psychology. Specific topics of interest include:
- Understanding what makes virtual reality work and how we can use it to change the self
- Creating a systematic understanding of multisensory experiences for interactive technologies
- Perceptual media quality
- Collaborative virtual environments for data visualisation and manipulation
My approach is driven by a mix of human factors studies and creativity.
My main research area lies at the interface between computer science and neuroscience, where I design and develop environments focused on skills training (from sporting abilities to applications meant to help people with cognitive disabilities perform different tasks). Recently, I started to look into what happens when we go beyond audio-visual interfaces. In this context, I became interested in the design of meaningful multisensory experiences that exploit different combinations of sensory modalities. I particularly enjoy working with systems that encompass visual, auditory, haptic and olfactory feedback.
Undergraduate – Introduction to Virtual Reality EL681
This undergraduate module introduces students to the main principles and technologies behind virtual reality. Students learn to produce interactive experiences in virtual environments using mobile and desktop virtual reality headsets.
I am interested in working with bright and motivated students in a variety of areas:
- Social scenarios in virtual reality
- Interactive environments
- Interaction and perception in augmented reality
- Human-computer interaction
- Virtual reality for visualisation of complex concepts (e.g., architecture, pharmacology)
Conference or workshop item
Zou, L. et al. (2017). Can Multisensorial Media Improve Learner Experience? in: 8th ACM on Multimedia Systems Conference. New York: ACM, pp. 315-320. Available at: http://dx.doi.org/10.1145/3083187.3084014.
In recent years, emerging immersive technologies (e.g. Virtual/Augmented Reality, multisensorial media) have brought brand-new multi-dimensional effects such as 3D vision, immersion, vibration, smell and airflow to gaming, video entertainment and other aspects of human life. This paper reports results from a European Horizon 2020 research project on the impact of multisensorial media (mulsemedia) on educational learner experience. A mulsemedia-enhanced test-bed was developed to deliver video content enhanced with haptic, olfactory and airflow effects. The results of the quality ratings and questionnaires show significant improvements in terms of mulsemedia-enhanced teaching.
Covaci, A., Olivier, A. and Multon, F. (2014). Third person view and guidance for more natural motor behaviour in immersive basketball playing. in: VRST 2014: 20th ACM Symposium on Virtual Reality Software and Technology. ACM, pp. 55-64. Available at: http://dx.doi.org/10.1145/2671015.2671023.
The use of Virtual Reality (VR) in sports training is now widely studied with the perspective of transferring motor skills learned in virtual environments (VEs) to real practice. However, precision motor tasks that require high accuracy have rarely been studied in the context of VEs, especially on Large Screen Image Display (LSID) platforms. An example of such a motor task is the basketball free throw, where the player has to throw a ball into a 46cm wide basket placed 4.2m away. In order to determine the best VE training conditions for this type of skill, we proposed and compared three training paradigms. These training conditions compared combinations of different user perspectives, first-person (1PP) and third-person (3PP), and the effectiveness of visual guidance. We analysed the performance of eleven amateur subjects who performed series of free throws in a real and in an immersive 1:1 scale environment under the proposed conditions. The results show that ball speed at the moment of release in 1PP was significantly lower compared to the real world, supporting the hypothesis that distance is underestimated in large screen VEs. However, ball speed in the 3PP condition was more similar to the real condition, especially when combined with guidance feedback. Moreover, when guidance information was provided, subjects released the ball at a higher, and closer to optimal, position (5-7% higher compared to no-guidance conditions). This type of information contributes to a better understanding of the impact of visual feedback on the motor performance of users who wish to train motor skills using immersive environments. Moreover, this information can be used by exergame designers who wish to develop coaching systems that transfer motor skills learned in VEs to real practice.