Research theme: Audio Signal Processing
Alongside our work in acoustics, we are very interested in audio signal processing across a variety of application areas:
Spatial Audio: We study audio for augmented and virtual reality, from reproduction and processing (such as spatial room impulse response interpolation and convolution in six degrees-of-freedom) to psychoacoustics (such as the role of source signal similarity in room acoustics position perception).
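To illustrate the static, single-position case that six-degrees-of-freedom interpolation generalizes, here is a minimal sketch of rendering a mono source through a multichannel spatial room impulse response (SRIR). The function name, channel layout, and use of NumPy/SciPy are illustrative assumptions, not a description of our tools.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_spatial(source, srir):
    """Convolve a mono source with a multichannel spatial room impulse
    response (e.g., a first-order Ambisonic SRIR with shape
    (num_samples, num_channels)), producing one output channel per
    SRIR channel."""
    return np.stack(
        [fftconvolve(source, srir[:, ch]) for ch in range(srir.shape[1])],
        axis=1,
    )

# Example: a 4-channel (first-order Ambisonic) SRIR applied to one second of audio
fs = 48000
source = np.random.randn(fs)
srir = np.random.randn(fs // 2, 4)
rendered = render_spatial(source, srir)  # shape: (len(source) + len(srir) - 1, 4)
```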
Machine Learning for Audio Effects: We study how machine learning can be applied to audio effects modelling, using neural networks and differentiable digital signal processing to create high-quality digital emulations of analog musical hardware. This includes recurrent neural network models, a popular approach to creating guitar amplifier plugins.
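As a rough sketch of what such a recurrent model looks like, the PyTorch snippet below maps an input sample stream to an output sample stream with a single LSTM layer, a linear output layer, and a residual connection. The layer sizes and residual structure are illustrative assumptions, not our published architecture.

```python
import torch
import torch.nn as nn

class AmpRNN(nn.Module):
    """Minimal recurrent model for black-box amplifier emulation:
    one LSTM layer followed by a linear projection to a single
    output sample per input sample."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, num_samples, 1) mono audio
        h, state = self.lstm(x, state)
        # residual connection: the network learns the deviation from the dry signal
        return self.out(h) + x, state

# Example: run one second of audio at 44.1 kHz through the (untrained) model
model = AmpRNN()
dry = torch.randn(1, 44100, 1)
wet, _ = model(dry)
```

In practice such models are trained on paired dry/processed recordings of the target hardware, with the hidden state carried across blocks so the model can run in real time.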
Aliasing Reduction: One research direction over the past several years has been the development of new techniques for reducing aliasing in nonlinear audio effects. This includes so-called antiderivative antialiasing, developed in conjunction with Julian Parker of Stability AI and with Aalto University.
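To give a flavour of the idea, here is a minimal first-order antiderivative antialiasing sketch for a tanh clipper: instead of applying the nonlinearity pointwise, the output is the divided difference of its antiderivative, F(x) = log(cosh(x)), between successive input samples. The threshold value and the stable log-cosh evaluation are implementation assumptions for this toy example.

```python
import numpy as np

def tanh_adaa1(x, eps=1e-9):
    """First-order antiderivative antialiasing for a tanh nonlinearity."""
    # log(cosh(x)) evaluated via logaddexp to avoid overflow for large |x|
    F = np.logaddexp(x, -x) - np.log(2.0)
    y = np.empty_like(x)
    y[0] = np.tanh(x[0])
    dx = np.diff(x)
    dF = np.diff(F)
    small = np.abs(dx) < eps                # ill-conditioned divided difference
    mid = 0.5 * (x[1:] + x[:-1])
    y[1:] = np.where(small, np.tanh(mid), dF / np.where(small, 1.0, dx))
    return y

# Example: heavily driven sine at 44.1 kHz, where a naive tanh would alias audibly
t = np.arange(44100) / 44100.0
y = tanh_adaa1(10.0 * np.sin(2 * np.pi * 4000.0 * t))
```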
Virtual Acoustics: Our group has led extensive developments in wave-based simulation for applications in architectural acoustics. Some of this work is rooted in the NESS Project activities from 2012-2016, but it continues to the present day, with new work on source and receiver directivity and on immersed boundary methods.
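For readers unfamiliar with wave-based methods, the snippet below is a toy explicit finite-difference time-domain (FDTD) update for the 2D wave equation, run at the Courant stability limit. It is only a sketch of the general scheme: production room-acoustics solvers use 3D grids, frequency-dependent impedance boundaries, and directional sources and receivers, none of which appear here.

```python
import numpy as np

def fdtd_2d(nx=200, ny=200, steps=400, c=343.0, dx=0.05):
    """Minimal 2D FDTD scheme for the wave equation on a rectangular
    domain with fixed (Dirichlet) boundaries, excited by an impulsive
    point source at the centre."""
    dt = dx / (c * np.sqrt(2.0))            # Courant limit for the 2D scheme
    lam2 = (c * dt / dx) ** 2
    u_prev = np.zeros((nx, ny))
    u = np.zeros((nx, ny))
    u[nx // 2, ny // 2] = 1.0               # impulsive excitation
    for _ in range(steps):
        # five-point Laplacian over the interior of the grid
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4.0 * u[1:-1, 1:-1])
        u_next = u.copy()
        u_next[1:-1, 1:-1] = 2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1] + lam2 * lap
        u_prev, u = u, u_next
    return u
```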