High-resolution Piano Transcription with Pedals by Regressing Onset and Offset Times

10/05/2020
by Qiuqiang Kong, et al.

Automatic music transcription (AMT) is the task of transcribing audio recordings into symbolic representations such as Musical Instrument Digital Interface (MIDI) files. Recently, neural network-based methods have been applied to AMT and have achieved state-of-the-art results. However, most previous AMT systems predict the presence or absence of notes in each frame of an audio recording, so their transcription resolution is limited to the hop size between adjacent frames. In addition, previous AMT systems are sensitive to misaligned onset and offset labels in audio recordings. For high-resolution evaluation, previous works have not investigated AMT systems evaluated with different onset and offset tolerances. For piano transcription, there is a lack of research on building AMT systems that transcribe both notes and pedals. In this article, we propose a high-resolution AMT system trained by regressing the precise times of onsets and offsets. At inference, we propose an algorithm to analytically calculate the precise onset and offset times of note and pedal events. We build both note and pedal transcription systems with our high-resolution AMT system, and we show that our AMT system is robust to misaligned onset and offset labels compared to previous systems. Our proposed system achieves an onset F1 of 96.72% on the MAESTRO dataset, outperforming the onsets and frames system from Google, which achieves 94.80%. Our system achieves a pedal onset F1 score of 91.86%, the first benchmark result for pedal transcription on the MAESTRO dataset. We release the source code of our work at https://github.com/bytedance/piano_transcription.
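
The analytical inference step mentioned above can be illustrated with a minimal sketch, under the assumption that each regression target is a small triangle of height 1 centred at the precise event time, so the peak (and hence a sub-frame event time) can be recovered from three adjacent frame outputs around a local maximum. The function name refine_event_time, the threshold value, and the example numbers are illustrative assumptions for this sketch and are not taken from the released implementation.

import numpy as np

def refine_event_time(frame_outputs, hop_seconds, threshold=0.3):
    """Recover sub-frame event times from frame-wise regression outputs.

    Sketch only: assumes each target is a triangle of height 1 centred at
    the precise event time, so the peak position can be interpolated from
    the three frames around a local maximum.
    frame_outputs: 1-D array of regression outputs for one pitch or pedal.
    Returns a list of refined event times in seconds.
    """
    events = []
    for i in range(1, len(frame_outputs) - 1):
        a, b, c = frame_outputs[i - 1], frame_outputs[i], frame_outputs[i + 1]
        # A local maximum above the threshold marks a candidate event.
        if b > threshold and b >= a and b >= c:
            # Interpolate the triangle peak between frames; the shift is in
            # (-0.5, 0.5) frames relative to frame i.
            denom = 2 * (b - a) if c > a else 2 * (b - c)
            shift = (c - a) / denom if denom > 0 else 0.0
            events.append((i + shift) * hop_seconds)
    return events

# Example: outputs roughly sampled from a triangle peaking between frames 3 and 4.
outputs = np.array([0.0, 0.1, 0.4, 0.9, 0.7, 0.2, 0.0])
print(refine_event_time(outputs, hop_seconds=0.01))  # ~[0.033] seconds

With a 10 ms hop size, a purely frame-wise decision could only place the onset at 30 ms or 40 ms; the interpolation above places it at roughly 33 ms, which is the kind of sub-frame resolution the regression targets are designed to enable.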
