Notation as visual representation of sound-based music

This text describes the musical evaluation of a hybrid music notation system that combines traditional notation with symbols and concepts from spectromorphological analysis. Over three academic years, from 2017 to 2020, three groups of composition students learned to work with sound notation, recreating and interpreting short electroacoustic music sketches based solely on their notation transcriptions – they had not heard the original sketches. The students' score interpretations bore obvious similarities to the original music sketches, and their written reflections showed no major difficulties in understanding the notation, although some participants struggled to find suitable sounds, especially sounds with stable pitch.


Introduction
Music notation is a central tool for composers of Western art music. The notation and the musical instruments of the Western tradition have developed together, and therefore the instruments' possibilities are largely expressed in terms of their abilities to convey the notation system's parameters. Regardless of an instrument's sonic capabilities, its success within the tradition has depended on its ability to adapt to this shared notation system. The notation thus constitutes a written language through which musical structures can be created and studied independently of the instruments set to perform them. This enables computer-aided composition, where musical structures are created as data to be converted to notation for subsequent instrumentation, a practice now integrated into interactive software like Max through notation packages such as Bach (Agostini & Ghisi, 2012) and MaxScore (Hajdu & Didkovsky, 2012). Indeed, digital technologies have provided many new possibilities for musical expression through notation, such as interactive scores and algorithmic notation (Couprie, 2015).
Electroacoustic music, which in general does not require a score for its creation, still has a long history of notated music: for its creation, for performers synchronising with a tape part, for music analysis, and for guiding the listener, as in the case of Wehinger's score for Ligeti's Artikulation (Wehinger, 1970). However, there is no standard for the notation of electroacoustic music; its symbolic representation often depends on the personal choices of the composer or on the technology used for sound production. Scores of this music rarely contain the level of detail required for the re-creation and re-interpretation of the musical work. However, there are well-developed analysis tools that could serve this purpose if, like traditional notation, they could be linked to acoustic properties of sound. This is a major strength of traditional music notation – that it relates both to the perception of music and to the acoustic features of music, albeit on a fundamentally structural level, focusing on pitch, rhythm and dynamics. The representation of these perceptually important elements allowed for the advancement of many complex and computational approaches to composition.
CONTACT Mattias Sköld mattias.skold@kmh.se Royal College of Music, Department of classical music, composition, and music theory, PO Box 27711, SE-115 91 Stockholm, Sweden
The focus of this research project is the visual representation of sound-based music, a music in which the sound and not the musical note is in focus (Landy, 2007). A main question has been: Based on existing and proven models for electroacoustic music analysis, is it possible to develop a notation system that enables the creation of notated sound-based music on the same terms as traditional music notation in the sense that notated works can be interpreted and reinterpreted by different performers and composers? Such notation could facilitate the creation of mixed works of acoustic and electronic sound sources with good integration, since all sounds would be conceived within the same symbolic system. It would also enable new methods of teaching sound-based music theory.
The new system is based on traditional music notation and Lasse Thoresen's spectromorphological analysis symbols (Thoresen & Hedman, 2007), from which a hybrid system was formed to visually represent music structures that contain both pitched and non-pitched sound objects. To test the system, three case studies were conducted with composition students over three consecutive academic years. In these studies, the students worked with the proposed notation system and independently interpreted and realised scores of sound notation, so that we could assess whether the results could be equated with different interpretations of the same work. The study also aimed to evaluate the intelligibility of the notation system in compositional contexts.

Background
For the majority of Western art music history, music was considered an acoustic medium, played or sung, whose core formal structure was constructed of tonal and rhythmic aspects. The idea of music as a mainly pitch-based art form certainly lives on, but throughout the twentieth century, non-pitched sounds and timbre as a musical parameter became increasingly important. The consequences of the introduction of non-pitched sounds in music theory are most clearly seen in academic electroacoustic music, where Pierre Schaeffer's typomorphology (Schaeffer, 2017) and Denis Smalley's spectromorphology (Smalley, 1997) have established themselves as central theories for our understanding of musical sound structures. While we have well-developed tools for the analysis of music conceived as organised sound, we do not have a notation system that accounts equally for the organised sounds. The notation systems used for acoustic and electroacoustic music have different strengths and weaknesses when faced with different kinds of music: Traditional music notation is apt for the representation of pitch-based music in major or minor keys – the music can be read and even 'heard' internally by someone with basic music reading skills. As the music moves into polytonal or atonal territory it becomes harder to read, but when pitches are abandoned altogether the reader is left guessing the sounding outcome unless the sounds made are established instrument-specific techniques, such as col legno on the cello. This relates to traditional music notation's capacity to be descriptive of musical structures. Seeger (1958) discussed how notation can be both prescriptive and descriptive, exemplifying the complexity of this relationship with a comparison of a melody's notation and a singer's pitch variation over time. As composers of the twentieth century started to explore new sounds, the scores' abilities to describe the sounding result were often lost.
The scores of sound-oriented composers like Helmut Lachenmann (1972), whose work exemplifies musique concrète instrumentale (Orning, 2012), are prescriptive of actions to be performed on specific instrument bodies rather than sounds to be produced. This is not a problem unless one wishes to re-interpret the music on different instruments or to analyse the music as sound rather than performance. Then the tools of electroacoustic music analysis are required, such as Thoresen's spectromorphological analysis system (Thoresen & Hedman, 2007). But although such tools work well for analysis, they were not designed to be prescriptive of musical performances. Also, because of the emphasis on timbre in such analysis frameworks, they tend to provide less information about tonal structures. As soon as identifiable pitches are introduced in music, the nature of their structural relations is of importance for the analysis.

Previous research
In the first book of his treatise, Pierre Schaeffer (2017) enters the discussion of a new experimental music from the perspective of existing music practice related to music making and hearing, pointing out how the two are connected. Skilled practitioners and experienced listeners of Western art music know to listen for the sounds relevant to the musical structures of the genre and draw conclusions from their organisation. But the same listeners may fail to pick up on values not part of the genres they are familiar with (Schaeffer, 2017). Schaeffer's typomorphology is an attempt at redefining what constitutes the criteria for these musical values to listen for. Schaeffer believes this requires that one actively avoids listening to sounds as sources. This act of avoidance is what Schaeffer calls reduced listening. 1 The basic concepts he introduces in this endeavour are three pairs: mass and facture, duration and variation, and balance and originality. From these concepts he constructed his famous TARTYP diagram, shown in Figure 1 (Schaeffer, 2017).
The focus of Schaeffer's Treatise is the description of sound objects, building the foundation for a new music theory with the aim of making the composition of new music possible (Schaeffer, 2017). Denis Smalley's theory of spectromorphology builds on Schaeffer's ideas to describe and analyse this new music (Smalley, 1997). In other words, where Schaeffer was more concerned with the building blocks of music, Smalley's ideas concern the music itself, as it unfolds over time. He is not primarily discussing types of objects, but the morphologies of spectra perceived in musical structures. Like Schaeffer, Smalley's starting point is the perception of sound while actively avoiding assumptions regarding the acoustic sources or technical tools used for its conception. He introduces the concepts of gesture and gesture surrogacy as means for discussing the experience of energy-motion trajectories in electroacoustic music that, had they been produced in an acoustic context, would be the results of physical activities. Smalley suggests that we experience these gestural activities in a sound even when it is not the product of any underlying physical activity. He also introduces several music analysis categories (and subcategories) that more specifically deal with musical expression, such as motion and growth processes, texture motions and spatiomorphology (Smalley, 1997). Smalley's workable development of Schaeffer's ideas for music analysis is reflected in Lasse Thoresen's spectromorphological notation (Thoresen & Hedman, 2015), derived from Schaeffer's typomorphology. As the article title suggests (Spectromorphological analysis of sound objects), Thoresen, in his adaptation of Schaeffer's concepts, also considers Smalley's work when adapting the typomorphology for the notation of sound-based musical structures.
While all three of these music theorists approach sound and music analysis as descriptions of what is individually perceived, there is an underlying idea of inter-subjectivity in identifying and describing particular aspects of sound objects and/or music. Though he strongly rejects the relevance of physical measurements of the musical signal, Schaeffer argues that, given a shared listening intention, groups of people will identify the same characteristics in the sound objects (Schaeffer, 2017). In that regard, it is not hard to relate his work and methods to subsequent music psychology research, such as Grey's (1977) influential explorations of the dimensions of timbre.

Using analysis theory for composition
Using concepts and symbols developed for electroacoustic analysis in the composition process is not a new approach: Kevin Patton developed a morphological notation system based on Smalley's spectromorphology, placing graphical symbols in a 3D space with the x-, y-, and z-axes representing time, frequency/pitch, and timbral characteristics (Patton, 2007). In a similar vein, Manuella Blackburn suggests using sound-shape illustrations related to spectromorphological categories to help in the composition process (Blackburn, 2011). Starting from Schaeffer's typomorphology, Israel Neuman developed Max software for generating sound structures live from the sound objects of Schaeffer's TARTYP, also displaying animated scores using Thoresen's symbols (Neuman, 2015). Even closer to the work presented here, Ricardo Climent already in 2008 used Thoresen's symbols for the notation of a work for percussion ensemble and tape, introducing the term searched objects for when the percussionists were asked to recreate ideal sound samples in order to discover the instruments and playing techniques needed for performing the score (Climent, 2008).
Figure 2. Overview of the basic elements for the notation of a sound object (Sköld, 2020).
When music analysis tools are used for composition, there is an implied connection made between the musical ideas and their physical manifestations, particularly for electroacoustic music – the concepts and symbols will eventually result in a (measurable) physical signal. Even though pitch and frequency are two different concepts not to be confused, much of modern music production has correlations between frequencies and pitches built into its working environment. Software such as Logic Pro works with both pitches and frequencies for control and visual representation. This is not proof that the two concepts are one and the same or even compatible, but it shows that for the sake of music composition they can co-exist. Pre-defined connections between musical scores and their physical manifestations can also be found in software offering automated analysis of music. This includes basic functions like pitch tracking of audio files in Logic Pro as well as all forms of score following (Orio et al., 2003), which presuppose that computers can relate scores to their resulting sounding performances. Starting from the physical signal, Andrew Blackburn and Jean Penny suggest a timbral notation developed from spectrograms for representing electroacoustic music (Blackburn & Penny, 2015). There are also examples of using MIR techniques for the analysis of sound-based music (Klien et al., 2012).

About the proposed notation system
The basic idea of the new notation system was to combine symbols and concepts from Thoresen and Hedman's spectromorphological analysis (Thoresen & Hedman, 2015) with traditional notation of pitch and rhythm. The analysis symbol system is modified for use with prescriptive notation, meaning that all symbols are redefined in relation to perceivable acoustic properties of the sound signal. Sound spectra are categorised according to Thoresen's division of sound spectra into pitched, complex and dystonic sounds, each with their dedicated note shape (circle, square, and diamond). As in the analysis system, extension lines from the note heads indicate the sounds' progress over time. Also, several new symbols are added, such as displays of spectral width and references to recognisable spectra. See Figure 2 for an overview of the basic elements for the notation of a single sound object. A longer presentation of the new notation system and how it relates to Thoresen's analysis system can be found in Sköld (2020). The most conspicuous change resulting from the adaptation of the analysis system is the backdrop of a spectrum staff system where indications of both pitch and frequency co-exist. The idea of using this kind of grand staff system to display pitch and frequency information comes from interactive music software such as PatchWork and Max, 2 where ready-made graphical objects can display pitch data in this way. As will be evident from the score examples in the case studies below, the system has undergone several changes along the way, many of which were the direct result of feedback from these studies. The notation system is not intended to replace the analysis techniques from which it was developed. Rather, it is meant to provide composers and performers of sound-based music with additional tools for working with sound structures on a conceptual level, in a notation language with explicit correlations to the acoustic properties of sound.

The case studies

Introduction
Once a prototype of the notation system was made, the first step was to test the notation with composition students at KMH, the Royal College of Music in Stockholm. Studying sonology is compulsory for all first-year bachelor students in composition at KMH, both for students from the electroacoustic and the Western art music profile. It was first taught as part of the course on musique concrète, before becoming an independent course module. The purpose of this module is not primarily to teach analysis practice, but rather to provide a vocabulary and a music theory framework both for analysis and for structured composition with sounds. Since the notation system was in part conceived in reply to demands from teaching sonology, this course module provided a natural opportunity for testing its functionality. Because the groups of students attending the course are always mixed (acoustic and electroacoustic), some will have a better pre-understanding of working with scores and notation while others will be more comfortable with studio work and sound processing.
What follows are accounts of three case studies carried out between 2017 and 2020. Results from the first study, in 2017, have been published before in a text introducing the first steps of this research (Sköld, 2018). In total, 25 composition students participated in these studies. No student took part in more than one study. The participants are coded as 17:1-7, 18:1-12 and 20:1-6, relating them to the year of each study; the individual numbering was randomised. In 2017 there was one score assignment, while there were two in both 2018 and 2020.

Method
The method was the same for each assignment:
• In preparation, a short electroacoustic music sketch was made by the author using a mix of electronic and recorded acoustic sounds.
• This sketch was transcribed by the author using the sound notation system, producing a single-page score in pdf format.
• The individual participants were then given the score (not having heard the original sound sketch) with the instruction to find, record and arrange sounds to realise the score as an electroacoustic music study.
• The groups from 2018 and 2020 were also instructed to provide written reflections on their assignment work. The 2017 group gave oral feedback.

The research data
Each score assignment resulted in a collection of individual re-interpretations of the original sketch, as sound files. These were accompanied by text files with written reflections. Two participant re-interpretations of a music sketch have been made available online together with all the original sketches. 3 Otherwise, for ethical reasons, the actual texts and musical works by the students are not published (the participants have, however, given their written consent to take part in the study). Spectrogram analyses and long-time average spectrum (LTAS) analyses are provided, comparing the original sound sketches to mixes of the participants' score interpretations from 2018 and 2020, so that the reader can see the timbre-related similarities. From 2020, only the second assignment has the spectrogram comparison.

Case study 1, autumn 2017
Introduction
This is part of a 2017 study previously published in 2018 (Sköld, 2018). The assignment was included in the Sonology course module in the course Sonology and Studio Technology (KMH, 2012). Besides the score realisations discussed here, the course module also included short composition assignments.

Participants
Seven composition students gave their consent to participate in the study (4F, 3M). Their average age was 27.71 years (SD = 6.92 years).

The assignment
The original electroacoustic sketch from which the assignment score (Figure 3) originated was a mixture of electronic and recorded acoustic sounds. This was to see if the participants could find suitable sounds to match the notation, regardless of how the original sounds were produced. The sketch starts with sustained noise, continues with bursts of granular, buzzing electronic sounds and ends with a flock of seagulls. See Figure 4 for a spectrogram of the original sketch – the audio file is available online. 4
The notation system was at this stage relatively close to Thoresen's analysis notation (Thoresen & Hedman, 2015) but placed over the aforementioned grand staff system, with traditional rhythm notation (beams and rests) integrated with the spectral symbols. Spectral width was indicated as vertical dashed lines for the complex (non-pitched) sounds. There was no indicator for amplitude or overall dynamics (nor was there one in Thoresen's analysis notation). The participants were asked to realise the score (Figure 3) as fixed media electroacoustic music, using only recorded acoustic sounds. The use of established musical instruments or voices was not allowed, so the participants would need to search for sounds not already part of an established composition practice. To make the sounds fit the spectral characteristics of the notated sound objects, they were allowed to transpose and filter the sounds. The type of filters allowed was not specified, but low-pass and high-pass filters were recommended. Otherwise, effects processing was not allowed. Besides the score and the assignment instructions, the participants received a compendium of the notation system and a frequency-to-pitch conversion chart.
4 https://doi.org/10.5281/zenodo.7071329
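A frequency-to-pitch conversion chart of this kind maps frequencies in Hz to the nearest equal-tempered pitch. The arithmetic behind such a chart can be sketched as follows – an illustrative sketch of the standard conversion (function and variable names are mine), not the chart handed to the participants:

```python
import math

# Pitch-class names for the 12 equal-tempered semitones (sharps only).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz, a4=440.0):
    """Return the nearest equal-tempered pitch name and the deviation in cents."""
    # MIDI note number 69 corresponds to A4 (440 Hz by default).
    midi = 69 + 12 * math.log2(freq_hz / a4)
    nearest = round(midi)
    cents = 100 * (midi - nearest)
    return NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1), cents

print(freq_to_pitch(440.0))    # exactly A4
print(freq_to_pitch(261.63))   # very close to C4
```

A chart like the one in the study would simply tabulate this mapping for the frequency range of interest.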

Results
Based on the participants' assignments and their observations during the seminars, a few initial problems were identified regarding the intelligibility of the notation system:
• The stems, beams and rests inside the spectral notation caused confusion – how was one to know that a quarter note stem was not a spectral feature when other vertical lines indicated spectral width or the interconnection between related sound objects?
• The long two-note iterated sound object starting in the fifth measure (see Figure 3) caused much confusion, as evidenced by a variety of interpretations of this sound object in the score realisations.
• The participants also discussed the lack of dynamic indications in the score, which was reflected in the diverse dynamic profiles of their assignments.
These problems aside, there were obvious music structural similarities between the original sketch and the participants' assignments. The similarities had to do with the relative spectral changes prescribed by the notation since all participants used different sounds for their score realisations.

Modifications
Based on the assignment results and the participants' feedback, modifications were made to the notation system and practice:
• Spectral and rhythm notation should be kept separate, placing the notation of rhythm on separate single-line staves below the spectral staves.
• The dashed extension line to indicate iteration was removed, since such sounds can just as well be notated as cases of granularity – what in Thoresen's system is called granular gait.
• Unrelated to the assignment, small note heads were introduced to indicate spectral components such as significant partials of an inharmonic spectrum.
• Furthermore, notation of dynamics was introduced.

Case study 2, 2018
Introduction
This time there were two score assignments, one simpler and one more advanced. Instead of a separate composition assignment the participants were asked to compose very short continuations of the assignment scores. The student group was larger this time.

Participants
Twelve composition students gave their consent to participate in the study (2F, 10M). Their average age was 24.83 years (SD = 3.08 years).

First assignment
The modifications mentioned above were implemented in this assignment score (see Figure 5), as exemplified by the rhythm notation below the spectral staves and the indications of dynamics. The original electroacoustic music sketch was again a mix of electronic and recorded acoustic sounds, starting with an electronic machine-like rhythmic bass, followed by noise fading in, a truck engine starting, three more bass notes, and four sounds of different timbre, dynamics and complexity, ending with the fade-in of audience applause. The audio file is available online. 5 Again, the participants were asked to realise the score (Figure 5) without having heard the original sketch. They could only use recorded acoustic sounds; voices and traditional instruments were not allowed. They were encouraged to find and record sounds in their environment, though this was not a requirement. Transposition and filtering were allowed. As in the previous year, the participants were provided with a compendium of the notation system and a frequency-to-pitch conversion chart. Also, if needed, they could download a compendium on how to filter sounds based on their analysed spectral content using Logic Pro's Channel EQ and its spectral analyser function.
5 https://doi.org/10.5281/zenodo.7071329

Results
Two out of 12 participants did not describe having any problem with the task but simply detailed their progress, and another four wrote positively about the assignment though they described certain challenges. Four participants made observations related to listening: 'It feels like an exercise in listening ...' (participant 18:8). Four expressed different difficulties in reading the notation: '... how much may the sound vary before it should need a different notation?' (participant 18:1). Seven out of 12 mentioned the difficulty of finding sounds with stable pitch, and four mentioned the difficulty of finding stable sounds overall. Six participants mentioned sound technology-related challenges – isolating sounds while recording or working with a Digital Audio Workstation.
The participants' score realisations resembled the original sketch, as demonstrated by the spectrogram comparison in Figure 6 and the long-time average spectrum (LTAS) comparisons in Figure 7. Both figures compare the original sound sketch to a mixdown of the participants' assignments (two assignments were not part of the mix because they did not have the same time structure as the others, making spectral comparisons more difficult). The long-time average spectrum shows the spectral distribution of a sound (Elowsson & Friberg, 2017) without considering changes over time; these analyses were included as complements to the spectrograms. As in the previous case, it was the relative spectral differences between the sound objects that made the assignments similar, since the participants used different sound sources for their score realisations.
Figure 6. The first 2018 notation assignment: at the top, a spectrogram of the original electroacoustic sketch; in the middle, its score transcription; at the bottom, a spectrogram of a mix of ten participants' realisations of the score (made without hearing the original sketch).
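An LTAS of the kind used for these comparisons is straightforward to compute: average the magnitude spectra of successive windowed frames, discarding the temporal evolution that a spectrogram retains. A minimal sketch (my own illustration; the function name and frame parameters are assumptions, not taken from the study):

```python
import numpy as np

def ltas(signal, sr, frame=2048, hop=1024):
    """Long-time average spectrum: the mean magnitude spectrum over all
    frames of the signal, ignoring how the spectrum changes over time."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    return freqs, mags.mean(axis=0)              # average across frames

# Usage on a synthetic 1 kHz tone: the LTAS peaks near 1000 Hz.
sr = 44100
t = np.arange(sr) / sr
freqs, spectrum = ltas(np.sin(2 * np.pi * 1000 * t), sr)
print(freqs[np.argmax(spectrum)])
```

Comparing two such averaged spectra gives exactly the kind of time-independent view of spectral distribution described above.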

Second assignment
The idea behind having two score assignments for the group was to be able to introduce more complexity in the second assignment and to see to what degree the experience of the first assignment could benefit the second. The second assignment's music was composed and notated in two layers (see Figure 8), starting with pitched sound objects of different characteristics (the first and third with vibrato) over a backdrop of a high-frequency bandwidth non-pitched accumulation and short individual non-pitched sounds. Then followed the start of an electric fan, notated as the combination of two pitched sounds and one non-pitched sound, with an amplitude variation curve for the combined object. The sketch ends with three suspended cymbal hits and a dense piano chord. The original audio file and one participant's interpretation of its score transcription are available online. 6 The participants were now allowed to use musical instruments, but still no voices or synthesisers. Transposition and filtering were allowed. They had the same resources as with the first assignment, but with additional symbols and notation functionality added to the compendium. These were indications of speed and variation. Indications of speed can be used to indicate the speed of the granular aspect of a sound object, while variations are waveform or curve indicators showing how different musical values vary over the course of a sound object. Variation indicators signify changes in terms of articulation rather than structure.
6 https://doi.org/10.5281/zenodo.7071329

Results
Eight out of 12 participants expressed that the assignment was easier or much easier this time. This was attributed to different things, e.g. being more familiar with the notation system, being more familiar with the process of realising this kind of score, and being allowed to use musical instruments: '[The work was faster because of] a better understanding of the notation and a now somewhat greater knowledge of how to work in [my Digital Audio Workstation]' (participant 18:5). The remaining four did not address the overall difficulty but simply detailed their process. Two participants expressed confusion over the interpretation of the combined sound object starting at the end of the second measure of the lower staff. One had questions regarding the notation of dynamics.
As in the previous assignment, the participants' score realisations resembled the timbre structure of the original sound sketch, again demonstrated by the spectrogram comparison between the original sketch and a mix of all participants' assignments shown in Figure 8 and the corresponding LTAS comparison of the same sound files in Figure 7. The aspect of the score realisations with the least agreement was the dynamic envelope of the individual sound objects. One participant's re-interpretation is available online. 7 This was one of the assignments with the greatest similarity to the original sketch, because of how well the music represents the notation symbols, but also because of the choice of sound sources.

Modifications
As a result of the assignments and written reflections, a specific indicator for amplitude envelope was created, to be placed below a sound object when appropriate. The symbol is introduced as a special case of variation -an equivalent of Schaeffer's concept allure (Schaeffer, 2017).
There were also other additions and modifications made to the notation system following the 2018 case study, but these were not related to the work with these assignments. Such modifications include new symbols for spatialisation (Sköld, 2019) as well as indicators for spectral centroid and spectral density – the frequency centre of a sound's spectral energy and the spectrum's density of partials with high amplitude.
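Of these, the spectral centroid has a well-established signal-processing definition: the amplitude-weighted mean frequency of the magnitude spectrum. A minimal sketch of that standard measure (my illustration of the general concept, not code from the project):

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum,
    i.e. the frequency centre of the sound's spectral energy."""
    mags = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

# A brighter sound yields a higher centroid than a darker one.
sr = 44100
t = np.arange(sr) / sr
dark = np.sin(2 * np.pi * 220 * t)
bright = np.sin(2 * np.pi * 3520 * t)
print(spectral_centroid(dark, sr), spectral_centroid(bright, sr))
```

An indicator for this value in a score thus points at a measurable property of the intended sound.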

Case study 3, spring 2020
Introduction
This year, the framing of the sonology module was different: Sonology had become part of the course Sonology and Musique Concrète (KMH, 2019). As in 2018, there were two score assignments – one simpler and one more advanced.

First assignment
The first assignment sound sketch (see Figure 9) starts with four fixed-pitch air raid sirens in the distance, the fade in and out of low-frequency noise, the strike of a low gong followed by the squeak and closing of a door. It ends with the ticking of a grandfather clock accompanied by the fade-in of high-frequency noise. For the sake of simplicity, the noise component of the introductory sirens was not included in the transcription. The audio file is available online. 8 As in the first assignment in 2018, only recorded acoustic sounds were allowed, and no voices or musical instruments. Transposition and filtering were allowed.

Results
In the written reflections, no participant expressed problems understanding the notation. Two participants wrote about challenges when matching spectral features to recorded acoustic sounds: 'I thought the most difficult thing was to imagine what it was meant to sound like and perhaps also what the harmonic spectrum really meant, how much should be cut off, how loudly the harmonics should appear, and so on' (participant 20:5). One participant mentioned the difficulty of finding sustained pitched sounds.
The score realisations resembled the timbre structure of the original sound sketch, though no more closely than in the first 2018 study. Figure 7 shows the long-time average spectrum (LTAS) comparisons between the original sketch and a mixdown of all the realisations the participants produced from its score transcription.
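The LTAS used for these comparisons is, in general terms, the average of short-time magnitude spectra across a recording's duration. The sketch below illustrates that averaging step under the assumption that per-frame magnitude spectra (e.g. from an STFT) are already available; the STFT itself and the study's actual analysis settings are omitted.

```python
# A minimal sketch of a long-time average spectrum (LTAS): average each
# frequency bin across all analysis frames. Input is assumed to be a
# list of per-frame magnitude spectra of equal length; computing those
# frames (e.g. via an STFT) is outside this sketch.

def ltas(frames):
    """Per-bin average magnitude across a sequence of spectral frames."""
    n_frames = len(frames)
    n_bins = len(frames[0])
    return [sum(frame[b] for frame in frames) / n_frames
            for b in range(n_bins)]

# Two toy 3-bin frames: the result averages each bin independently.
frames = [[1.0, 0.0, 2.0],
          [3.0, 1.0, 0.0]]
print(ltas(frames))  # [2.0, 0.5, 1.0]
```

Averaging over the whole duration discards temporal detail, which is precisely why LTAS is suited to comparing the overall spectral balance of a sketch and its re-creations rather than their moment-to-moment evolution.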

Second assignment
As in 2018, the second assignment was more complex, using two layers and more detailed notation (see Figure 10). The layer of the lower staff consisted of percussion sounds of different timbres, two of which had separately specified attack components with indicators for spectral centroid. One dystonic sound (clusters or inharmonic spectra) had a specified significant partial. The upper staff had electronic granular sounds with specified high spectral density, ending with the crescendo of a sustained pitched sound with vibrato. The original audio file and one participant's interpretation of its score transcription are available online.9 This time, any recorded acoustic sounds were allowed, including musical instruments and voices. The participants were also allowed to use sound synthesis, but only to produce partials of sound objects. Transposition and filtering were allowed, and they could add reverb to the whole mix.

Figure 9. The score for the first 2020 notation assignment, with a spectrogram of the original sketch above it.

Results
Four out of six participants expressed no difficulty reading the notation, or did not address the issue. Two participants described challenges in interpreting specific spectral parameters:

[I was uncertain how to] interpret the beginning and end of the vertical lines when they overlapped the note lines unevenly, but I assumed that it was not so precise and rounded first approximately and then to the nearest semitone. (participant 20:1)

Two had difficulties realising the sound object with a 4 Hz vibrato. Two discussed how the character of a sound changes when specific spectral parameters are altered. Regarding a complex sound with too large a spectral width, one participant wrote: 'The solution then should be to filter [the sound], but then I experience that the sound changes very much, and that it can even become pitched' (participant 20:2).
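The vibrato that caused difficulty above is, acoustically, a slow periodic modulation of a tone's frequency. As a point of reference only, the following sketch synthesises a sine tone with a 4 Hz vibrato by accumulating phase over a modulated instantaneous frequency; the carrier frequency, vibrato depth, and sample rate are example values, not parameters taken from the assignment.

```python
import math

# A sketch of a sustained sine tone with 4 Hz vibrato, generated by
# phase accumulation over a slowly modulated instantaneous frequency.
# f0, depth, and sr are illustrative values, not the assignment's.

def vibrato_tone(f0=440.0, rate=4.0, depth=5.0, sr=44100, dur=1.0):
    """Samples of a sine at f0 Hz whose pitch wavers ±depth Hz at `rate` Hz."""
    samples, phase = [], 0.0
    for n in range(int(sr * dur)):
        t = n / sr
        # Instantaneous frequency oscillates around the carrier.
        inst_freq = f0 + depth * math.sin(2 * math.pi * rate * t)
        phase += 2 * math.pi * inst_freq / sr
        samples.append(math.sin(phase))
    return samples

tone = vibrato_tone(dur=0.5)
print(len(tone))  # 22050 samples for half a second at 44.1 kHz
```

Phase accumulation (rather than evaluating sin(2πf·t) with a time-varying f directly) keeps the waveform continuous as the frequency changes, which is the standard way to avoid clicks in frequency-modulated synthesis.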
As in the previous assignment, the participants created score realisations that resembled the timbre structure of the original sketch. See Figures 10 and 7 for spectrogram and LTAS comparisons between the original sketch and a mixdown of the realisations the participants produced from its score transcription (one participant's sound sketch is not part of the spectrogram comparison in Figure 10 because of different timing). Judging by the participants' written reflections, where sounds deviated from the notated objects, this was more likely a result of the difficulty of finding and editing sounds to match the notation than of a problem understanding the meaning of the symbols.
One participant's re-interpretation is available online.10 As in the example from 2018, this realisation bore great similarity to the original sketch, though it used different sounds.

Modifications
No modifications to the notation system were made as a result of the 2020 case study.

Figure 10. The second 2020 notation assignment: at the top is a spectrogram of the original electroacoustic sketch; in the middle is its score transcription; at the bottom is the spectrogram of a mix of five participants' realisations of the score (made without having heard the original sketch).

Discussion
The core of the problem of visually representing music lies in what Schaeffer describes as the division of music perception into objective and subjective layers. In the Western music tradition, the identity of a musical work is related to a shared musical language expressed by specified sound sources. This means that additional auditory information added during a work's performance has no impact on the identity of the musical work as long as the structure related to the musical language is intact – The Moonlight sonata played badly is still The Moonlight sonata. For musicians, it is particularly the expressions of musical language that determine work identity. This was demonstrated by Wolpert (1987, 1990) in studies where musicians heard similarities between music of the same melodic and harmonic structure, while non-musicians relied on timbre similarities for the recognition of music. In other words, for a musician, The Moonlight sonata played on a Moog synthesiser is still The Moonlight sonata.

Initially, electroacoustic music escaped these structural constraints, both because technology made it possible to transcend the values of traditional music notation, and because composition and performance formed an integrated process in electroacoustic tape music. However, looking back at 70 years of electroacoustic music, it seems that pitch, rhythm, dynamics, and timbre were not replaced as parameters of musical expression (though a fifth parameter was added, viz. space, as foreseen by Varèse (1967)). But while traditional music theory focuses on pitch and rhythm, the core elements of its notation language, electroacoustic music theory is centred around expressions of timbre. Schaeffer's typomorphology can be seen as the first systematic study of timbre as a musical parameter.
And among the merits of Smalley's and Thoresen's development of Schaeffer's ideas is the introduction of terms and tools for us to describe timbre in sound-based music in ways that make sense for music analysis. Timbre was not a new musical parameter, but it was not part of the Western musical language either. These theories made timbre available for structured composition and analysis.
Besides showing the music-parameter biases of musicians and non-musicians, Wolpert's studies (1987, 1990) teach us how practitioners of music naturally have an insider's, craft-related perspective on the identity of music. It is this kind of insider's perspective that the case studies presented here provide. All participants were composers of contemporary music. They had their own ideas of musical structures and identities, but as with traditional notation, they could find common ground in a shared system of representation.
One should not draw too far-reaching conclusions regarding the sound notation's capacity for preserving the identity of an electroacoustic music work, since there is no simple answer to the question of what parameters and what level of detail guarantee the identity of a particular work. However, the author's assessment is that the collected score interpretations in these studies were each time musically close to the original sound sketches from which their scores were produced, even though some sounds of the original sketches were electronic while the participants had to rely almost solely on recorded acoustic sounds for their assignment work. This was exemplified by the two example score realisations provided by participants in 2018 and 2020 respectively. The spectral similarities between the original sketches and the works made from their transcriptions are visible in the spectrogram comparisons in Figures 6, 8 and 10, complemented by the long-time average spectrum comparisons in Figure 7. The spectra are not identical, since the actual sound sources are not specified in the scores, and even when the participants could read the notation, finding acoustic sounds to match was at times difficult. However, the comparisons in Figure 7 also show increasing spectral similarity from the first to the second assignment in both the 2018 and 2020 studies, so there was an element of learning involved. The intelligibility of the notation was only occasionally questioned in the written reflections, though there were, especially in the first assignments, problems with specific notation symbols and functions, which were solved in later versions of the notation system. Otherwise, most of the difficulties expressed by the participants concerned finding and shaping sounds to match the score. In assessing the reflections, it is of course worth considering that the students were addressing their grading teacher.

Conclusion and future work
The three case studies presented here show that a sound notation system based on a combination of phenomenological electroacoustic music analysis and traditional notation can describe sound-based musical structures with sufficient detail for music creation. A musical phrase of pitched and non-pitched sounds can maintain a form of musical identity as it is transcribed and re-interpreted by different musicians using this notation. Furthermore, the written feedback from the participants of these studies shows that the notation system was intelligible, particularly following the modifications made after the 2017 and 2018 studies. The participants also described the challenges of realising scores of this kind, because the notation does not provide any clues regarding the actions to be performed. Moreover, the participants' recurring problems with finding sustained pitched sounds outside the world of musical instruments reveal how closely integrated traditional instruments and their notation language are.
There are many routes one can take in developing this notation further. Since the notation is compatible with traditional notation, the system could potentially be introduced wherever traditional notation is normally used. This includes transcribing improvised music to make new interpretations possible, as has been done to a great extent with traditional notation in folk music and jazz.11 Electroacoustic music ear training is another field that would benefit from sound-based notation; this subject is generally taught in the same manner as sound-engineer ear training, though there are exceptions (Tsabary, 2015). Sound notation could also enable the composition of mixed works for acoustic and electronic forces, where the two sound worlds can be conceptually formulated in the same notation language and only later divided between the two sound sources. The challenges the case-study participants faced as they searched for appropriate sounds for a notation based on sound and its perception could be welcomed by specialist contemporary music performers who know how to produce different sounds on their instruments. Anyone interested in this approach to composition and teaching is encouraged to work with this notation system, and any resources developed in connection with this research will be made freely available.

Data availability
Due to the nature of this research, the participants of this study did not consent to their data being shared publicly, so supporting data are not available other than the selected data files available from the links supplied above.

Ethics declaration
At the time of the case studies, my institution did not require the approval of an ethics committee for this kind of case study. That said, all participants signed consent forms covering their participation and the use of their data in this research, and specific consent was given regarding the publication of the two student assignments.

Disclosure statement
No potential conflict of interest was reported by the author(s).

11. An example is Keith Jarrett's Köln Concert (Jarrett, 1975), which was improvised and later transcribed and performed by other pianists.