Lick the Toad: a web-based interface for collective sonification

  • Konstantinos Vasilakos, Istanbul Technical University

Abstract


Lick the Toad is an ongoing project developed as a web-based interface that runs in modern browsers. It provides a custom-made platform for collecting user data from mobile devices such as smartphones and tablets. The system offers a tool for interactive collective sonification in the spirit of networked music performance, and it can be used in various contexts: as an on-site installation, as an interactive compositional tool, or to distribute raw data for live coding performances. The system embeds neural network capabilities for prediction, trained on user-supplied inputs and outputs/targets alike. Both the inputs and the targets of the training process can be adapted to the needs of the user, making it a versatile component for creative practice. It is developed as an open-source project and currently runs as a Node.js application, with plans for future deployment on a remote server to support remote communication and interaction amongst distant users.
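The abstract outlines the system's data flow: mobile browsers stream user data to a Node.js server, which pools it for sonification or relays it raw to live coders. As a rough sketch of how such a relay might be wired up (this is illustrative only, not the project's actual code; the ws package, the port number, and the JSON message shape are assumptions), a minimal Node.js server could look like this:

// relay.js — illustrative sketch only; assumes Node.js and the `ws` package (npm install ws).
const { WebSocketServer, WebSocket } = require('ws');

// Mobile clients connect here and stream readings sampled in the browser
// (e.g. touch positions or accelerometer values); live-coding or sonification
// clients connect to the same endpoint and receive the pooled data.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    let reading;
    try {
      reading = JSON.parse(raw); // hypothetical shape: { id: 'phone-3', x: 0.12, y: -0.4 }
    } catch {
      return; // ignore malformed packets
    }
    // Relay each reading to every other connected client so performers
    // (or a mapping/prediction layer) can consume the raw data stream in real time.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(reading));
      }
    }
  });
});

console.log('Relaying mobile user data on ws://localhost:8080');

In a setup like the one described, the pooled data could then feed the neural-network component, whose inputs and targets are adapted to each use case.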

Keywords: Artificial Intelligence, A-Life and Evolutionary Music Systems, Computer Music and Creative Processes, Real-time Interactive Systems

Published
24/10/2021
How to Cite

VASILAKOS, Konstantinos. Lick the Toad: a web-based interface for collective sonification. In: SIMPÓSIO BRASILEIRO DE COMPUTAÇÃO MUSICAL (SBCM), 18. , 2021, Recife. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 178-188. DOI: https://doi.org/10.5753/sbcm.2021.19444.