Abstract
Mixing music, or music production, is the process of combining a number of different musical tracks while applying a range of audio processing to blend them into a coherent and aesthetically pleasing whole. Music mixing practice requires every aspect of a musical piece to be combined effectively, using a set of engineering tools to do so.
© 2021 Springer Nature Switzerland AG
Cite this chapter
Moffat, D. (2021). AI music mixing systems. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music. Springer, Cham. https://doi.org/10.1007/978-3-030-72116-9_13
Print ISBN: 978-3-030-72115-2
Online ISBN: 978-3-030-72116-9