Efficient perceptually-motivated diffraction modelling for VR/AR and computer games
Start date: 2022
End date: 2025
Physically accurate audio within video games and virtual reality improves the sense of immersion for the end user. For a truly real-time solution that can adapt to an evolving scene, approximations must be made by assuming sound propagates as rays. Modelling diffraction (how sound ‘bends’ around obstacles) is a challenging aspect of these geometrical models and has only been implemented successfully in real-time systems within the last few years.
There has been very limited research on the perceptibility of edge diffraction. This project proposes using parametrically optimised filters and perceptually-motivated path culling as a more efficient method of modelling diffraction in real time that remains perceptually similar to less efficient methods.
This project aims to improve the efficiency of diffraction modelling by considering two aspects of diffraction: (a) efficiently calculating the response of a given diffracting edge; and (b) reducing the number of diffraction paths considered within a scene.
The first section takes advantage of the computational efficiency of infinite impulse response (IIR) filters. Parameter optimisation methods will be used to design IIR filters that approximate the frequency response of a diffracting edge given by existing physically accurate models such as the Biot-Tolstoy-Medwin (BTM) model. This will be validated perceptually through ABX listening tests.
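As a rough sketch of the filter-fitting idea, a numerical optimiser can tune the coefficients of a low-order IIR filter (here a single biquad) so that its magnitude response matches a target curve. The target used below is a hypothetical low-pass roll-off standing in for a real BTM edge response; the optimiser, frequency grid, and filter order are illustrative assumptions, not the project's final design.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import freqz

fs = 48000
freqs = np.geomspace(20, 20000, 64)  # log-spaced evaluation grid

# Hypothetical stand-in for a BTM edge response: a gentle low-pass
# roll-off, since diffracted energy generally falls with frequency.
target_mag_db = -6.0 * np.log2(1.0 + freqs / 2000.0)

def iir_mag_db(params, freqs, fs):
    """Magnitude response (dB) of a biquad [b0, b1, b2, a1, a2], a0 = 1."""
    b = params[:3]
    a = np.concatenate(([1.0], params[3:]))
    _, h = freqz(b, a, worN=2 * np.pi * freqs / fs)
    return 20.0 * np.log10(np.abs(h) + 1e-12)

def cost(params):
    # Mean squared dB error between filter and target responses
    return np.mean((iir_mag_db(params, freqs, fs) - target_mag_db) ** 2)

x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # start from a flat pass-through
res = minimize(cost, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print(f"RMS error after fitting: {np.sqrt(res.fun):.2f} dB")
```

In practice the error metric would be perceptually weighted and the fitted filters checked for stability, but the structure — a parametric filter, a target response from a physical model, and an optimiser minimising the mismatch — is the same.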
The second section investigates which diffraction paths are most perceptually significant. Reducing the number of paths considered improves computational efficiency. This step has been included in existing systems; however, there has been very limited perceptual evaluation of it. Methods of reducing the number of diffraction paths, including mesh simplification, level of detail for scattering geometries, and diffraction path culling, will be evaluated using adaptive up-down methods to find the relevant just-noticeable differences (JNDs).
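To illustrate how an up-down method estimates a JND, the sketch below simulates a 2-down/1-up staircase (which converges on the ~70.7%-correct point) against a hypothetical listener whose internal threshold is known. The listener model, step size, and reversal count are illustrative assumptions, not the project's actual experimental design.

```python
import random

def staircase_2down1up(threshold, start_level, step, n_reversals=8, seed=0):
    """Simulated 2-down/1-up adaptive staircase.

    `threshold` is the simulated listener's hypothetical internal
    threshold; the returned estimate is the mean of the last few
    reversal levels, a common JND estimator.
    """
    rng = random.Random(seed)
    level, correct_streak, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated listener: detects the change when the stimulus level
        # exceeds their threshold, with some response noise.
        detected = level + rng.gauss(0, step / 2) > threshold
        if detected:
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row: step down
                correct_streak = 0
                if direction == +1:          # direction changed: reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            correct_streak = 0               # one miss: step up
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-4:]) / 4           # mean of last four reversals

print(staircase_2down1up(threshold=3.0, start_level=10.0, step=1.0))
```

In the actual experiments the "level" would be a parameter of the diffraction rendering (e.g. the aggressiveness of path culling or mesh simplification), and the estimated threshold would indicate how far the approximation can be pushed before listeners notice.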