
Transfer of Knowledge Among Instruments in Automatic Music Transcription

  • Conference paper
  • Part of the book: Artificial Intelligence and Soft Computing (ICAISC 2023)

Abstract

Automatic music transcription (AMT) is one of the most challenging tasks in the music information retrieval domain. It is the process of converting an audio recording of music into a symbolic representation containing information about the notes, chords, and rhythm. Current research in this domain focuses on developing new models based on the transformer architecture or on semi-supervised training methods, which yield outstanding results, but the computational cost of training such models is enormous. This work shows how to employ easily generated audio data produced by software synthesizers to train a universal model that provides a good basis for further transfer learning, allowing the transcription model to be quickly adapted to other instruments. The achieved results show that synthesized data may be a good basis for pretraining general-purpose models in which transcription is not focused on a single instrument.
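
To make the pipeline concrete, the sketch below illustrates the pretrain-then-transfer idea described above in PyTorch: a transcription network is first trained on cheaply synthesized, multi-instrument audio and then fine-tuned on a small amount of target-instrument data. The network, the random tensors standing in for (spectrogram, piano-roll) pairs, and all hyperparameters are illustrative assumptions rather than the authors' exact setup (their code is linked in the Notes below).

    # Illustrative sketch of the pretrain-then-transfer workflow from the
    # abstract; the architecture and data here are hypothetical stand-ins.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    class TranscriptionNet(nn.Module):
        """Toy stand-in for a transcription model: spectrogram ->
        frame-wise pitch activations (88 piano keys)."""
        def __init__(self, freq_bins=229, pitches=88):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Linear(16 * freq_bins, pitches)

        def forward(self, spec):                  # (B, 1, F, T)
            h = self.conv(spec)                   # (B, 16, F, T)
            h = h.permute(0, 3, 1, 2).flatten(2)  # (B, T, 16*F)
            return self.head(h)                   # (B, T, 88) logits

    def train(model, loader, lr, epochs):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()          # frame-wise note targets
        for _ in range(epochs):
            for spec, roll in loader:
                opt.zero_grad()
                loss_fn(model(spec), roll).backward()
                opt.step()

    # Random tensors stand in for (spectrogram, piano-roll) training pairs.
    synth  = TensorDataset(torch.randn(32, 1, 229, 100), torch.rand(32, 100, 88))
    target = TensorDataset(torch.randn(8, 1, 229, 100), torch.rand(8, 100, 88))

    model = TranscriptionNet()
    # 1) Pretrain a universal model on cheap synthesized, multi-instrument audio.
    train(model, DataLoader(synth, batch_size=8), lr=1e-3, epochs=2)
    # 2) Transfer: fine-tune the same weights on scarce target-instrument data,
    #    typically with a lower learning rate.
    train(model, DataLoader(target, batch_size=8), lr=1e-4, epochs=2)

The only point of the sketch is the two-stage schedule: in practice the pretrained checkpoint would be saved once and reused as the starting point for each new target instrument.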

Notes

  1. http://www.fluidsynth.org/.

  2. https://member.keymusician.com/Member/FluidR3_GM/index.html.

  3. https://github.com/w4k2/automatic_music_transcription.
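
Footnotes 1 and 2 above point to the FluidSynth software synthesizer and the FluidR3_GM General MIDI soundfont used to produce the synthesized training audio. As a rough illustration of how such data can be mass-produced, the sketch below batch-renders MIDI files to WAV with the fluidsynth command-line tool; the directory names and sample rate are assumptions, not the paper's exact pipeline (footnote 3 links the authors' code).

    # Illustrative sketch: batch-render MIDI to audio with the FluidSynth CLI.
    # Paths, names, and the sample rate are hypothetical; FluidR3_GM.sf2 is
    # the soundfont linked in footnote 2.
    import subprocess
    from pathlib import Path

    def render(midi: Path, wav: Path,
               soundfont: Path = Path("FluidR3_GM.sf2"),
               sample_rate: int = 16000) -> None:
        # -n: ignore MIDI input, -i: no interactive shell,
        # -F: render to file, -r: output sample rate.
        subprocess.run(
            ["fluidsynth", "-ni", str(soundfont), str(midi),
             "-F", str(wav), "-r", str(sample_rate)],
            check=True)

    # Render every MIDI file in a (hypothetical) dataset directory.
    out_dir = Path("audio_dataset")
    out_dir.mkdir(exist_ok=True)
    for midi in sorted(Path("midi_dataset").glob("*.mid")):
        render(midi, out_dir / (midi.stem + ".wav"))

Swapping the soundfont (or the General MIDI program numbers inside the files) is what makes it cheap to cover many instruments from the same MIDI source material.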

Acknowledgment

This work is supported by the CEUS-UNISONO programme, which has received funding from the National Science Centre, Poland, under grant agreement No. 2020/02/Y/ST6/00037. We would like to thank Jędrzej Kozal for his support during the creation of this work.

Author information

Corresponding author

Correspondence to Michał Leś.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Leś, M., Woźniak, M. (2023). Transfer of Knowledge Among Instruments in Automatic Music Transcription. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2023. Lecture Notes in Computer Science, vol 14125. Springer, Cham. https://doi.org/10.1007/978-3-031-42505-9_11

  • DOI: https://doi.org/10.1007/978-3-031-42505-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42504-2

  • Online ISBN: 978-3-031-42505-9

  • eBook Packages: Computer Science, Computer Science (R0)
