Acoustic Classification and Optimization for Multi-Modal Rendering of Real-World Scenes

2018 
We present a novel algorithm to generate virtual acoustic effects in captured 3D models of real-world scenes for multi-modal augmented reality. We leverage recent advances in 3D scene reconstruction to automatically compute acoustic material properties. Our technique is a two-step procedure that first applies a convolutional neural network (CNN) to estimate the acoustic material properties, including frequency-dependent absorption coefficients, used for interactive sound propagation. In the second step, an iterative optimization algorithm adjusts the materials determined by the CNN until the virtual acoustic simulation converges to measured acoustic impulse responses. We have applied our algorithm to many reconstructed real-world indoor scenes and evaluated its fidelity for augmented reality applications.
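The second step, refining CNN-estimated absorption coefficients until a simulation matches measured data, can be sketched as follows. This is a minimal stand-in, not the authors' method: it replaces their interactive sound-propagation simulator with Sabine's reverberation formula (RT60 = 0.161 V / (S·α)) and matches a measured reverberation time per octave band instead of a full impulse response. The room dimensions, band frequencies, and measured values are illustrative assumptions.

```python
# Simplified sketch: refine per-band absorption coefficients so a
# Sabine-formula "simulation" matches measured reverberation times.
# V, S, and the measured RT60 values below are assumed, not from the paper.

V = 200.0  # room volume in m^3 (assumed)
S = 210.0  # total interior surface area in m^2 (assumed)

def simulated_rt60(alpha):
    """Sabine's formula: RT60 = 0.161 * V / (S * alpha), in seconds."""
    return 0.161 * V / (S * alpha)

def optimize_absorption(measured_rt60, lo=0.01, hi=1.0, iters=60):
    """Iteratively adjust the absorption coefficient until the simulated
    RT60 converges to the measured one. RT60 decreases monotonically in
    alpha, so a bisection search suffices for this simplified model."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if simulated_rt60(mid) > measured_rt60:
            lo = mid  # too reverberant: needs more absorption
        else:
            hi = mid  # too dry: needs less absorption
    return 0.5 * (lo + hi)

# Refine an absorption coefficient independently per octave band,
# starting from the band's measured reverberation time (assumed values).
measured = {125: 0.9, 250: 0.7, 500: 0.6, 1000: 0.5}  # RT60 in seconds
refined = {band: optimize_absorption(rt) for band, rt in measured.items()}
```

The authors' optimization instead compares full simulated and measured impulse responses, but the structure is the same: a forward acoustic model in the loop, with material coefficients adjusted until the simulated output converges to the measurement.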