Abstract
In this chapter, we review and summarize methods for the evaluation of computer systems for expressive music performance (CSEMPs). The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can take different forms: for example, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of the model and of the evaluation. A separate section presents and discusses Rencon (Performance Rendering Contest), a contest in which expressive performances of the same score generated by different CSEMPs are compared. Practical examples from previous work are presented, commented on, and analysed.
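The abstract mentions measures for estimating the difference between performances. As a minimal sketch (the choice of measure and all deviation values below are our own assumptions, not taken from the chapter), one simple option is the Pearson correlation between the timing-deviation profiles of two performances of the same score:

```python
# Sketch of a simple performance-difference measure: Pearson correlation
# between the timing-deviation profiles of two performances of one score.
# The deviation values below are illustrative, not data from the chapter.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Timing deviations (% of nominal inter-onset interval) per note,
# for a modelled and a human performance of the same score.
model_dev = [2.0, -1.5, 0.5, 3.0, -2.0, 1.0]
human_dev = [1.8, -1.0, 0.7, 2.5, -2.2, 0.8]

print(pearson(model_dev, human_dev))  # close to 1 for similar profiles
```

A value near 1 would indicate that the model reproduces the shape of the human timing profile; in practice such correlations are typically computed per phrase and per cue (timing, dynamics, articulation) rather than over a whole piece.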
Notes
1. The scores were polyphonic and were selected from a battery of stimuli specially composed at Montreal University (www.brams.umontreal.ca/plab/downloads/Emotional_Clips.zip). The battery consisted of 14 scores per emotion. All 14 × 4 scores were rated along four adjective scales (happy, sad, scary, and peaceful) in a previous study [39]. The highest rated and least ambiguous score for each emotion was selected as the stimulus in our experiment.
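The selection step described in this note can be sketched as follows. The score names and ratings are invented for illustration, and "least ambiguous" is modelled here as the margin between the target-emotion rating and the strongest competing rating, which is our own assumption about the criterion:

```python
# Sketch of the stimulus-selection step: for each target emotion, pick the
# score rated highest on its own adjective scale relative to the other three.
# All names and ratings below are invented for illustration.

def select_stimulus(scores, target):
    """scores: list of (name, {emotion: mean_rating}); returns best score name."""
    def margin(item):
        name, ratings = item
        others = [v for e, v in ratings.items() if e != target]
        # High target rating minus the strongest competing rating.
        return ratings[target] - max(others)
    return max(scores, key=margin)[0]

happy_scores = [
    ("h01", {"happy": 4.1, "sad": 1.2, "scary": 1.0, "peaceful": 2.8}),
    ("h02", {"happy": 4.5, "sad": 1.1, "scary": 1.0, "peaceful": 1.9}),
    ("h03", {"happy": 3.9, "sad": 1.0, "scary": 1.2, "peaceful": 3.5}),
]

print(select_stimulus(happy_scores, "happy"))  # h02: highest and least ambiguous
```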
5. Strong accept = 5, weak accept = 4, borderline = 3, weak reject = 2, strong reject = 1.
References
Widmer G, Goebl W (2004) Computational models of expressive music performance: the state of the art. J New Music Res 33(3):203–216
Canazza S, De Poli G, Drioli C, Roda A, Vidolin A (2004) Modeling and control of expressiveness in music performance. Proc IEEE 92(4):686–701
Bresin R, Friberg A (2000) Emotional coloring of computer controlled music performance. Comput Music J 24(4):44–63
Todd NPMcA (1985) A model of expressive timing in tonal music. Music Percept 3:33–58
Friberg A, Bresin R, Frydén L, Sundberg J (1998) Musical punctuation on the microlevel: automatic identification and performance of small melodic units. J New Music Res 27(3):271–292
Cambouropoulos E (1998) Towards a general computational theory of musical structure. PhD thesis, Faculty of Music and Department of Artificial Intelligence, University of Edinburgh
Ahlbäck S (2004) Melody beyond notes. A study in melody cognition. PhD thesis, Department of Musicology, Göteborg University
Bresin R (1998) Artificial neural networks based models for automatic performance of musical scores. J New Music Res 27(3):239–270
Goebl W, Dixon S, De Poli G, Friberg A, Bresin R, Widmer G (2008) Sense in expressive music performance: data acquisition, computational studies, models. In: Polotti P, Rocchesso D (eds) Sound to sense - sense to sound: a state of the art in sound and music computing. Logos Verlag, Berlin, pp 195–242
Goebl W, Widmer G (2009) On the use of computational methods for expressive music performance. In: Crawford T, Gibson L (eds) Modern methods for musicology: prospects, proposals and realities. Ashgate, Aldershot, pp 93–113
Repp BH (1992) Diversity and commonality in music performance: an analysis of timing microstructure in Schumann’s “Träumerei”. J Acoust Soc Am 92(5):2546–2568
Friberg A, Sundström A (2002) Swing ratios and ensemble timing in jazz performance: evidence for a common rhythmic pattern. Music Percept 19(3):333–349
Repp BH (1999) A microcosm of musical expression: II. Quantitative analysis of pianists’ dynamics in the initial measures of Chopin’s Etude in E major. J Acoust Soc Am 105:1972–1988
De Poli G (2004) Methodologies for expressiveness modeling of and for music performance. J New Music Res 33(3):189–202
Friberg A (1995) Matching the rule parameters of Phrase arch to performances of “Träumerei”: a preliminary study. In: Friberg A, Sundberg J (eds) Proceedings of the KTH symposium on Grammars for music performance, 27 May 1995, pp 37–44
Friberg A, Sundberg J (1995) Time discrimination in a monotonic, isochronous sequence. J Acoust Soc Am 98(5):2524–2531
Repp BH (1995) Detectability of duration and intensity increments in melody tones: a partial connection between music perception and performance. Percept Psychophys 57(8):1217–1232
Zanon P, De Poli G (2003) Estimation of parameters in rule systems for expressive rendering in musical performance. Comput Music J 27:29–46
Zanon P, De Poli G (2003) Estimation of time-varying parameters in rule systems for music performance. J New Music Res 32(3):295–316
Todd NPMcA (1989) A computational model of rubato. Contemp Music Rev 3:69–88
Juslin PN, Friberg A, Bresin R (2002) Toward a computational model of expression in performance: the GERM model. Musicae Scientiae, Special issue 2001–2002, 63–122
Sundberg J, Friberg A, Bresin R (2003) Attempts to reproduce a pianist’s expressive timing with Director Musices performance rules. J New Music Res 32(3):317–326
Friberg A, Battel GU (2002) Structural communication. In: Parncutt R, McPherson GE (eds) The science and psychology of music performance: creative strategies for teaching and learning. Oxford University Press, New York, pp 199–218
Marsland S (2009) Machine learning: an algorithmic perspective. Chapman & Hall/CRC, Boca Raton
Friberg A, Sundberg J (1999) Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. J Acoust Soc Am 105(3):1469–1484
Widmer G (2003) Discovering simple rules in complex data: a meta-learning algorithm and some surprising musical discoveries. Artif Intell 146(2):129–148
Widmer G (2002) Machine discoveries: a few simple, robust local expression principles. J New Music Res 31:37–50
Friberg A, Bresin R, Sundberg J (2006) Overview of the KTH rule system for musical performance. Adv Cogn Psychol Spec Issue Music Perform 2(2–3):145–161
Bresin R (2001) Articulation rules for automatic music performance. In: Schloss A, Dannenberg R, Driessen P (eds) Proceedings of the international computer music conference – ICMC 2001. ICMA, San Francisco, pp 294–297
Goebl W (2001) Melody lead in piano performance: expressive device or artifact? J Acoust Soc Am 110(1):563–572
Bjurling J (2007) Timing in piano music - a model of melody lead. Master of Science thesis, KTH Royal Institute of Technology, School of Computer Science and Communication, Stockholm, Sweden. ISSN 1653-5715. http://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2007/rapporter07/bjurling_johan_07115.pdf
Friberg A (2006) pDM: an expressive sequencer with real-time control of the KTH music performance rules. Comput Music J 30(1):37–48
Bjurling J, Bresin R (2008) Timing in piano music – testing a model of melody lead. In: Proceedings of ICMPC 10, Sapporo
Gabrielsson A, Lindström E (2010) The role of structure in the musical expression of emotions. In: Juslin PN, Sloboda JA (eds) Handbook of music and emotion: theory, research, applications. Oxford University Press, Oxford, pp 367–400
Juslin PN, Timmers R (2010) Expression and communication of emotion in music performance. In: Juslin PN, Sloboda JA (eds) Handbook of music and emotion: theory, research, applications. Oxford University Press, Oxford, pp 453–489
Repp BH (1997) Acoustics, perception, production of legato articulation on a computer-controlled grand piano. J Acoust Soc Am 102(3):1878–1890
Bresin R, Battel GU (2000) Articulation strategies in expressive piano performance. Analysis of legato, staccato, repeated notes in performances of the Andante movement of Mozart's sonata in G major (K 545). J New Music Res 29(3):211–224
Bresin R, Friberg A (2011) Emotion rendering in music: range and characteristic values of seven musical variables. Cortex 47(9):1068–1081
Vieillard S, Peretz I, Gosselin N, Khalfa S, Gagnon L, Bouchard B (2007) Happy, sad, scary and peaceful musical excerpts for research on emotions. Cogn Emot 22(4):720–752
Hashida M, Nakra M, Katayose H, Murao T, Hirata K, Suzuki K, Kitahara T (2008) Rencon: performance rendering contest for automated music systems. In: Proceedings of international conference on music perception and cognition (ICMPC 2008)
Hiraga R, Hashida M, Hirata K, Katayose H, Noike K (2002) RENCON: toward a new evaluation method for performance rendering systems. In: Proceedings of the international computer music conference, pp 357–361
Hiraga R, Bresin R, Hirata K, Katayose H (2003) Rencon in 2002. In: Proceedings of the IJCAI-03 Rencon workshop, Acapulco, Mexico, pp 59–64
Hiraga R, Bresin R, Hirata K, Katayose H (2004) Rencon 2004: Turing test for musical expression. In: NIME ’04: proceedings of the 4th international conference on new interfaces for musical expression, Hamamatsu, Shizuoka, Japan, pp 120–123
Hiraga R, Bresin R, Katayose H (2006) Rencon 2005. In: Proceedings of the 20th annual conference of the Japanese Society for Artificial Intelligence (1D2–1)
Noike K, Toyoda K, Katayose H (2005) An initial implementation of corpus based performance rendering system “COPER”. Info Process Soc Jpn (IPSJ) 2005(14):67–70
Hashida M, Nagata N, Katayose H (2005) A study of description capability of performance characteristics on PopE. In: The 19th annual conference of JSAI
Widmer G, Flossmann S, Grachten M (2009) YQX plays Chopin. AI Mag 30(3):35–48
Questions
1. What are the two main evaluation methods that can be identified overall in CSEMPs?
2. What is the difference between generality and flexibility in a model?
3. For the comparison with ground truth data approach, name the three ways of modelling and evaluating a system.
4. In which of the above three approaches is the default evaluation method implicit in the methodology?
5. Describe the analysis-by-synthesis modelling approach.
6. What are the elements which may cause the melody lead effect in human piano playing?
7. Describe an interaction listening test.
8. What is one possible way of establishing an evaluation method which could be applied to different CSEMPs?
9. What is an important aspect which could contribute towards evaluation being done more seriously?
10. How might the major issue of lack of performance data and limited test material be addressed by the research community?
Copyright information
© 2013 Springer-Verlag London
Cite this chapter
Bresin, R., Friberg, A. (2013). Evaluation of Computer Systems for Expressive Music Performance. In: Kirke, A., Miranda, E. (eds) Guide to Computing for Expressive Music Performance. Springer, London. https://doi.org/10.1007/978-1-4471-4123-5_7
Print ISBN: 978-1-4471-4122-8
Online ISBN: 978-1-4471-4123-5