In multisensory systems, such as 3D applications and Virtual Reality, spatial cues describing the environment surrounding the user are carried through sound. Human hearing uses auditory stimuli to perceive space and to identify entities that cannot be detected by other senses. Hence, in realistic representations of virtual environments (VEs), rendering spatial hearing enhances interaction and immersion. Spatial information about an environment can be conveyed to listeners through real-time convolution of impulse responses in the signal processing chain (Vorländer et al., 2014). In addition, Siltanen (2005) proposes an approach that enables real-time acoustic modelling by reducing the geometry of the environment. In virtual reality, presence is also induced by auditory information through spatialised audio (Larsson et al., 2010; Stanney et al., 1995). This project therefore proposes a study aimed at improving audio processing chains for virtual and augmented reality.
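As a minimal illustration of the convolution-based rendering mentioned above, the sketch below convolves a dry (anechoic) signal with a room impulse response (RIR). Everything here is assumed for illustration: the signal is a synthetic sine tone, the RIR is hand-built from a direct path plus a few decaying reflections, and NumPy's offline `convolve` stands in for the partitioned, low-latency convolution a real-time engine would use.

```python
import numpy as np

# Assumed parameters for this toy example.
sample_rate = 16_000  # Hz
dry = np.sin(2 * np.pi * 440 * np.arange(sample_rate) / sample_rate)  # 1 s of 440 Hz

# Synthetic RIR: unit direct path plus three decaying early reflections.
rir = np.zeros(2_000)
rir[0] = 1.0
for delay, gain in [(400, 0.6), (900, 0.35), (1500, 0.2)]:
    rir[delay] = gain

# "Full" convolution: output length = len(dry) + len(rir) - 1.
wet = np.convolve(dry, rir)
print(wet.shape)  # (17999,)
```

A real-time chain would instead process the signal block by block (e.g. via partitioned FFT convolution) so that the RIR can be applied, and even swapped as the listener moves, without audible latency.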
More information on the project, from potential impact to references, can be found on the accompanying PDF.
Duration: 36 Months
Deadline to Apply: 19 January 2020