Abstract
Automatic music transcription (AMT) is one of the most challenging tasks in the music information retrieval domain. It is the process of converting an audio recording of music into a symbolic representation containing information about the notes, chords, and rhythm. Current research in this domain focuses on developing new models based on the transformer architecture or on semi-supervised training methods, which give outstanding results, but the computational cost of training such models is enormous. This work shows how to employ easily generated audio data produced by software synthesizers to train a universal model. Such a model is a good base for further transfer learning, allowing a transcription model to be quickly adapted to other instruments. The achieved results show that synthesized data can serve as a basis for pretraining general-purpose models in which transcription is not focused on a single instrument.
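The data-generation step the abstract alludes to can be illustrated with a short sketch. The code below is an assumption-laden approximation, not the authors' exact pipeline: it uses pretty_midi with a FluidSynth backend and a General MIDI SoundFont (the file path is hypothetical) to render the same MIDI score with several instrument timbres, so that the MIDI events themselves provide perfectly aligned ground-truth labels for training.

```python
# Minimal sketch of synthesizing training audio from MIDI files.
# Assumes: pretty_midi, soundfile, pyfluidsynth, and a General MIDI
# SoundFont available at SOUNDFONT_PATH (hypothetical path).
import pretty_midi
import soundfile as sf

SOUNDFONT_PATH = "FluidR3_GM.sf2"  # hypothetical SoundFont location
SAMPLE_RATE = 16000

def render_midi(midi_path: str, wav_path: str, program: int) -> None:
    """Render a MIDI file to audio using a chosen GM instrument program."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    for inst in midi.instruments:
        if not inst.is_drum:
            inst.program = program  # re-voice every melodic track
    audio = midi.fluidsynth(fs=SAMPLE_RATE, sf2_path=SOUNDFONT_PATH)
    sf.write(wav_path, audio, SAMPLE_RATE)

# Render one score with several timbres, so a model trained on the
# resulting (audio, MIDI) pairs is not tied to a single instrument.
for program, name in [(0, "piano"), (24, "guitar"), (40, "violin")]:
    render_midi("score.mid", f"score_{name}.wav", program)
```

Because the labels come directly from the rendered MIDI, no manual annotation is needed; the cost of covering a new instrument is essentially the cost of re-rendering the corpus with a different program.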
Acknowledgment
This work is supported by the CEUS-UNISONO programme, which has received funding from the National Science Centre, Poland under grant agreement No. 2020/02/Y/ST6/00037. We would like to thank Jȩdrzej Kozal for his support during the creation of this work.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Leś, M., Woźniak, M. (2023). Transfer of Knowledge Among Instruments in Automatic Music Transcription. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2023. Lecture Notes in Computer Science, vol. 14125. Springer, Cham. https://doi.org/10.1007/978-3-031-42505-9_11