A Computer Vision-Inspired Automatic Acoustic Material Tagging System for Virtual Environments

Published in IEEE Conference on Games (CoG), 2020

Recommended citation: Colombo, M., Dolhasz, A. and Harvey, C., 2020, August. A computer vision inspired automatic acoustic material tagging system for virtual environments. In 2020 IEEE Conference on Games (CoG) (pp. 736-739). IEEE. https://www.researchgate.net/publication/344471209_A_Computer_Vision_Inspired_Automatic_Acoustic_Material_Tagging_System_for_Virtual_Environments

This paper presents ongoing work on an approach to material information retrieval in virtual environments (VEs). Our approach uses convolutional neural networks to classify materials by performing semantic segmentation on images captured in the VE. The resulting class maps are then re-projected onto the environment geometry. We use transfer learning, fine-tuning a pre-trained segmentation model on images captured in our VEs. The geometry and semantic information can then be used to create mappings between objects in the VE and acoustic absorption coefficients. These mappings can serve as input to physically-based audio renderers, significantly reducing the manual material tagging effort. The full text is available via the ResearchGate link above.
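The final step of the pipeline, mapping per-pixel material classes to acoustic absorption coefficients, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class IDs, material names, and octave-band coefficient values below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical absorption coefficients at six octave bands
# (125 Hz - 4 kHz); values are illustrative, not from the paper.
ABSORPTION = {
    0: np.array([0.01, 0.01, 0.02, 0.02, 0.03, 0.05]),  # e.g. concrete
    1: np.array([0.15, 0.11, 0.10, 0.07, 0.06, 0.07]),  # e.g. wood
    2: np.array([0.08, 0.24, 0.57, 0.69, 0.71, 0.73]),  # e.g. carpet
}

def class_map_to_absorption(class_map: np.ndarray) -> np.ndarray:
    """Convert an HxW map of predicted material class IDs into an
    HxWxB array of acoustic absorption coefficients (B bands),
    ready to be re-projected onto the VE geometry."""
    bands = len(next(iter(ABSORPTION.values())))
    out = np.zeros((*class_map.shape, bands))
    for cls, coeffs in ABSORPTION.items():
        out[class_map == cls] = coeffs  # assign band coefficients per pixel
    return out

# Example: a tiny 2x2 "class map" from a segmented VE frame
seg = np.array([[0, 1],
                [2, 2]])
coeffs = class_map_to_absorption(seg)
print(coeffs.shape)  # (2, 2, 6)
```

In a full system, `class_map` would come from the fine-tuned segmentation network, and the per-pixel coefficients would be aggregated per surface before being passed to the physically-based audio renderer.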