Learning to Model Aspects of Hearing Perception Using Neural Loss Functions
We present a framework for modeling the perceived quality of audio signals by combining convolutional architectures with ideas from classical signal processing, and describe an approach to enhancing perceived acoustic quality. We demonstrate the approach by transforming the sound of an inexpensive musical instrument with degraded sound quality into that of a high-quality instrument, without the need for parallel data, which is often hard to collect. We adapt the classical approach of simple adaptive EQ filtering to an objective criterion learned by a neural architecture, and optimize the signal of interest against this learned objective. Because we learn adaptive masks that depend on the input signal, rather than a fixed transformation applied to all inputs, we show that shallow neural architectures can achieve the desired result. A simple constraint on the objective and on the initialization helps us avoid adversarial examples, which would otherwise produce noisy, unintelligible audio. We believe the proposed framework has broad applicability across problems where a loss function can be learned with a neural architecture and then optimized once it has been learned.
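To make the optimization idea concrete, here is a minimal PyTorch sketch of optimizing a per-signal EQ mask against a frozen learned loss. Everything here is an illustrative assumption rather than the authors' implementation: `perceptual_loss_net` stands in for whatever pretrained quality network is learned, the mask is a simple per-frequency gain on the STFT, and the zero-initialized log-mask plus quadratic penalty stand in for the paper's constraint and initialization that discourage adversarial solutions.

```python
import torch

def enhance(signal_stft, perceptual_loss_net, steps=200, lr=1e-2, reg=1e-3):
    """Optimize a per-signal EQ mask against a frozen learned loss.

    signal_stft:         complex STFT of the degraded input, shape (freq, time)
    perceptual_loss_net: a pretrained network scoring perceived quality
                         (lower is better); its weights stay frozen here.
    """
    perceptual_loss_net.eval()
    for p in perceptual_loss_net.parameters():
        p.requires_grad_(False)

    # Initialize the log-mask at zero, i.e. the identity EQ filter; starting
    # at the input (together with the regularizer below) discourages the
    # optimizer from drifting toward adversarial, unintelligible audio.
    log_mask = torch.zeros(signal_stft.shape[0], 1, requires_grad=True)
    opt = torch.optim.Adam([log_mask], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        mask = torch.exp(log_mask)                   # positive per-frequency gains
        enhanced = mask * signal_stft                # apply the adaptive EQ
        loss = perceptual_loss_net(enhanced.abs())   # learned quality score
        loss = loss + reg * log_mask.pow(2).sum()    # keep the mask near identity
        loss.backward()
        opt.step()

    return (torch.exp(log_mask) * signal_stft).detach()
```

Note that only the mask is optimized; the learned loss network stays fixed, so the "model" fit per input is just a shallow, signal-dependent filter, which is why no deep generative architecture is needed at enhancement time.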