Creativity and Machine Learning: A Survey

Online AM: 11 May 2024

Abstract

There is growing interest in the intersection of machine learning and creativity. This survey presents an overview of the history and the state of the art of computational creativity theories, key machine learning techniques (including generative deep learning), and corresponding automatic evaluation methods. After a critical discussion of the key contributions in this area, we outline the current research challenges and emerging opportunities in the field.

References

  1. Panos Achlioptas, Maks Ovsjanikov, Kilichbek Haydarov, Mohamed Elhoseiny, and Leonidas Guibas. 2021. ArtEmis: Affective Language for Art. In Proc. of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21).Google ScholarGoogle ScholarCross RefCross Ref
  2. Sergio Agnoli, Laura Franchin, Enrico Rubaltelli, and Giovanni Emanuele Corazza. 2019. The Emotionally Intelligent Use of Attention and Affective Arousal under Creative Frustration and Creative Success. Personality and Individual Differences 142 (2019), 242–248.Google ScholarGoogle ScholarCross RefCross Ref
  3. Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, and Christian Frank. 2023. MusicLM: Generating Music From Text. (2023). arXiv:2301.11325Google ScholarGoogle Scholar
  4. Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. 2021. VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. In Advances in Neural Information Processing Systems (NeurIPS’21).Google ScholarGoogle Scholar
  5. Andrei G. Aleinikov, Sharon Kackmeister, and Ron Koenig. 2000. Creating Creativity: 101 Definitions (what Webster Never Told You). Alden B. Dow Creativity Center Press, Midland, MI.Google ScholarGoogle Scholar
  6. Teresa M. Amabile. 1983. The Social Psychology of Creativity: A Componential Conceptualization. Journal of Personality and Social Psychology 45, 2(1983), 357–376.Google ScholarGoogle ScholarCross RefCross Ref
  7. Jyoti Aneja, Alexander G. Schwing, Jan Kautz, and Arash Vahdat. 2021. A Contrastive Learning Approach for Training Variational Autoencoder Priors. In Advances in Neural Information Processing Systems (NeurIPS’21).Google ScholarGoogle Scholar
  8. Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. 2021. ViViT: A Video Vision Transformer. In Proc. of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV’21).Google ScholarGoogle ScholarCross RefCross Ref
  9. Charles Babbage. 1864. Of the Analytical Engine. In Passages from the Life of a Philosopher. Vol.  3. Longman, Green, Longman, Roberts, & Green, 112–141.Google ScholarGoogle Scholar
  10. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An Actor-Critic Algorithm for Sequence Prediction. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  11. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of the 3rd International Conference on Learning Representations (ICLR’15).Google ScholarGoogle Scholar
  12. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez,..., and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI Feedback. (2022). arXiv:2212.08073Google ScholarGoogle Scholar
  13. Pierre Baldi and Laurent Itti. 2010. Of Bits and Wows: A Bayesian Theory of Surprise with Applications to Attention. Neural Networks 23(2010), 649–666.Google ScholarGoogle ScholarDigital LibraryDigital Library
  14. Hangbo Bao, Li Dong, and Furu Wei. 2022. BEiT: BERT Pre-Training of Image Transformers. In Proc. of the 10th International Conference on Learning Representations (ICLR’22).Google ScholarGoogle Scholar
  15. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning Long-Term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks 5, 2 (1994), 157–166.Google ScholarGoogle ScholarDigital LibraryDigital Library
  16. Walter Benjamin. 2008. The Work of Art in the Age of Mechanical Reproduction. Penguin Books Ltd, London, UK.Google ScholarGoogle Scholar
  17. Daniel E. Berlyne. 1971. Aesthetics and Psychobiology. Appleton-Century-Crofts, New York, NY.Google ScholarGoogle Scholar
  18. Sebastian Berns and Simon Colton. 2020. Bridging Generative Deep Learning and Computational Creativity. In Proc. of the 11th International Conference on Computational Creativity (ICCC’20).Google ScholarGoogle Scholar
  19. James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. 2024. Improving Image Generation with Better Captions. (2024). Retrieved April 30, 2024 from https://cdn.openai.com/papers/dall-e-3.pdfGoogle ScholarGoogle Scholar
  20. Federico Betti, Giorgia Ramponi, and Massimo Piccardi. 2020. Controlled Text Generation with Adversarial Learning. In Proc. of the 13th International Conference on Natural Language Generation (INLG’20).Google ScholarGoogle ScholarCross RefCross Ref
  21. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. 2023. Training Diffusion Models with Reinforcement Learning. In ICML’23 Workshop on Efficient Systems for Foundation Models.Google ScholarGoogle Scholar
  22. Margaret A. Boden. 2003. The Creative Mind: Myths and Mechanisms. Routledge, London, UK.Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. Rishi Bommasani, Drew Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney Arx, Michael Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, and Percy Liang. 2021. On the Opportunities and Risks of Foundation Models. (2021). arXiv:2108.07258Google ScholarGoogle Scholar
  24. Sam Bond-Taylor, Adam Leach, Yang Long, and Chris G. Willcocks. 2021. Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 11(2021), 7327–7347.Google ScholarGoogle ScholarCross RefCross Ref
  25. Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. 2022. AudioLM: a Language Modeling Approach to Audio Generation. (2022). arXiv:2209.03143Google ScholarGoogle Scholar
  26. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating Sentences from a Continuous Space. In Proc. of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNNL’16).Google ScholarGoogle ScholarCross RefCross Ref
  27. Oliver Bown. 2021. Beyond the Creative Species. The MIT Press, Cambridge, MA.Google ScholarGoogle Scholar
  28. Herbie Bradley, Andrew Dai, Hannah Teufel, Jenny Zhang, Koen Oostermeijer, Marco Bellagente, Jeff Clune, Kenneth Stanley, Grégory Schott, and Joel Lehman. 2024. Quality-Diversity through AI Feedback. In Proc. of the 12th International Conference on Learning Representations (ICLR’24).Google ScholarGoogle Scholar
  29. Selmer Bringsjord, Paul Bello, and David Ferrucci. 2001. Creativity, the Turing Test, and the (Better) Lovelace Test. Minds and Machines 11(2001), 3–27.Google ScholarGoogle ScholarDigital LibraryDigital Library
  30. Terence Broad, Sebastian Berns, Simon Colton, and Mick Grierson. 2021. Active Divergence with Generative Deep Learning - A Survey and Taxonomy. In Proc. of the 12th International Conference on Computational Creativity (ICCC’21).Google ScholarGoogle Scholar
  31. Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  32. Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei Efros, and Tero Karras. 2022. Generating Long Videos of Dynamic Scenes. In Advances in Neural Information Processing Systems (NeurIPS’22).Google ScholarGoogle Scholar
  33. Tim Brooks, Bill Peebles, Connor Homes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Wing Yin Ng, Ricky Wang, and Aditya Ramesh. 2024. Video generation models as world simulators. (2024). Retrieved April 30, 2024 from https://openai.com/research/video-generation-models-as-world-simulatorsGoogle ScholarGoogle Scholar
  34. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter,..., and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems (NeurIPS’20).Google ScholarGoogle Scholar
  35. Razvan C. Bunescu and Oseremen O. Uduehi. 2019. Learning to Surprise: A Composer-Audience Architecture. In Proc. of the 10th International Conference on Computational Creativity (ICCC’19).Google ScholarGoogle Scholar
  36. Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A. Efros. 2019. Large-Scale Study of Curiosity-Driven Learning. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  37. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. 2016. Importance Weighted Autoencoders. In Proc. of the 4th International Conference on Learning Representations (ICLR’16).Google ScholarGoogle Scholar
  38. Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. 2018. Understanding Disentangling in β-VAE. (2018). arXiv:1804.03599Google ScholarGoogle Scholar
  39. Kevin Burns. 2006. Atoms of EVE’: A Bayesian Basis for Esthetic Analysis of Style in Sketching. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 20(2006), 185–199.Google ScholarGoogle ScholarDigital LibraryDigital Library
  40. Kevin Burns. 2015. Computing the Creativeness of Amusing Advertisements: A Bayesian Model of Burma-Shave’s Muse. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 29(2015), 109–128.Google ScholarGoogle ScholarCross RefCross Ref
  41. Amílcar Cardoso, Tony Veale, and Geraint A. Wiggins. 2009. Converging on the Divergent: The History (and Future) of the International Joint Workshops in Computational Creativity. AI Magazine 30, 3 (2009), 15.Google ScholarGoogle ScholarDigital LibraryDigital Library
  42. Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar,..., and Dylan Hadfield-Menell. 2023. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. (2023). Transactions on Machine Learning Research.Google ScholarGoogle Scholar
  43. Tuhin Chakrabarty, Vishakh Padmakumar, and He He. 2023. Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing. In Proc. of the AAAI’23 Workshop on Creative AI Across Modalities.Google ScholarGoogle Scholar
  44. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020. Generative Pretraining From Pixels. In Proc. of the 37th International Conference on Machine Learning (ICML’20).Google ScholarGoogle ScholarDigital LibraryDigital Library
  45. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,..., and Wojciech Zaremba. 2021. Evaluating Large Language Models Trained on Code. (2021). arXiv:2107.03374Google ScholarGoogle Scholar
  46. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Advances in Neural Information Processing Systems (NeurIPS’16).Google ScholarGoogle Scholar
  47. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating Long Sequences with Sparse Transformers. (2019). arXiv:1904.10509Google ScholarGoogle Scholar
  48. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Systems (NeurIPS’17).Google ScholarGoogle Scholar
  49. Eric Chu. 2018. Artistic Influence GAN. In Proc. of the NeurIPS’18 Workshop on Machine Learning for Creativity and Design.Google ScholarGoogle Scholar
  50. Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural Text Generation in Stories Using Entity Representations as Context. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).Google ScholarGoogle ScholarCross RefCross Ref
  51. Harold Cohen. 1988. How to Draw Three People in a Botanical Garden. In Proc. of the 7th AAAI National Conference on Artificial Intelligence (AAAI’88).Google ScholarGoogle Scholar
  52. Simon Colton. 2008. Creativity Versus the Perception of Creativity in Computational Systems. In Proc. of the 2008 AAAI Spring Symposium.Google ScholarGoogle Scholar
  53. Simon Colton. 2012. The Painting Fool: Stories from Building an Automated Painter. In Computers and Creativity. Springer, Berlin, Heidelberg, 3–38.Google ScholarGoogle Scholar
  54. Simon Colton, John William Charnley, and Alison Pease. 2011. Computational Creativity Theory: The FACE and IDEA Descriptive Models. In Proc. of the 2nd International Conference on Computational Creativity (ICCC’11).Google ScholarGoogle Scholar
  55. Simon Colton, Jakob Halskov, Dan Ventura, Ian Gouldstone, Michael Cook, and Blanca Pérez-Ferrer. 2015. The Painting Fool Sees! New Projects with the Automated Painter. In Proc. of the 6th International Conference on Computational Creativity (ICCC’15).Google ScholarGoogle Scholar
  56. Simon Colton and Geraint A. Wiggins. 2012. Computational Creativity: The Final Frontier?. In Proc. of the 20th European Conference on Artificial Intelligence (ECAI’12).Google ScholarGoogle Scholar
  57. David Cope. 1989. Experiments in Musical Intelligence (EMI): Non‐Linear Linguistic‐Based Composition. Interface 18(1989), 117–139.Google ScholarGoogle Scholar
  58. Katherine Crowson, Stella Biderman, Daniel Kornis, Dashiell Stander, Eric Hallahan, Louis Castricato, and Edward Raff. 2022. VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance. In Proc. of the 17th European Conference on Computer Vision (ECCV’22).Google ScholarGoogle ScholarDigital LibraryDigital Library
  59. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and Play Language Models: A Simple Approach to Controlled Text Generation. In Proc. of the 8th International Conference on Learning Representations (ICLR’20).Google ScholarGoogle Scholar
  60. Yashar Deldjoo, Tommaso Di Noia, and Felice Antonio Merra. 2021. A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks. Comput. Surveys 54, 2 (2021), 1–38.Google ScholarGoogle ScholarDigital LibraryDigital Library
  61. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proc. of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).Google ScholarGoogle Scholar
  62. Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. 2020. Jukebox: A Generative Model for Music. (2020). arXiv:2005.00341Google ScholarGoogle Scholar
  63. Prafulla Dhariwal and Alexander Quinn Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. In Advances in Neural Information Processing Systems (NeurIPS’21).Google ScholarGoogle Scholar
  64. Li Ding, Jenny Zhang, Jeff Clune, Lee Spector, and Joel Lehman. 2023. Quality Diversity through Human Feedback. In Proc. of the NeurIPS’23 Workshop ALOE.Google ScholarGoogle Scholar
  65. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. 2021. CogView: Mastering Text-to-Image Generation via Transformers. In Advances in Neural Information Processing Systems (NeurIPS’21).Google ScholarGoogle Scholar
  66. Chris Donahue, Julian McAuley, and Miller Puckette. 2019. Adversarial Audio Synthesis. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  67. Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. 2017. Adversarial Feature Learning. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  68. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang. 2018. MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment. In Proc. of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence.Google ScholarGoogle Scholar
  69. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proc. of the 9th International Conference on Learning Representations (ICLR’21).Google ScholarGoogle Scholar
  70. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson,..., and Claire Cui. 2022. GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. In Proc. of the 39th International Conference on Machine Learning (ICML’22).Google ScholarGoogle Scholar
  71. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. 2017. Adversarially Learned Inference. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  72. Douglas Eck and Jurgen Schmidhuber. 2002. Finding temporal structure in music: blues improvisation with LSTM recurrent networks. In Proc. of the 12th IEEE Workshop on Neural Networks for Signal Processing.Google ScholarGoogle ScholarCross RefCross Ref
  73. Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. 2017. CAN: Creative Adversarial Networks, Generating ”Art” by Learning About Styles and Deviating from Style Norms. In Proc. of the 8th International Conference on Computational Creativity (ICCC’17).Google ScholarGoogle Scholar
  74. Ahmed Elgammal and Babak Saleh. 2015. Quantifying Creativity in Art Networks. In Proc. of the 6th International Conference on Computational Creativity (ICCC’15).Google ScholarGoogle Scholar
  75. Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. 2023. CLAP: Learning Audio Concepts From Natural Language Supervision. In Proc. of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’23).Google ScholarGoogle ScholarCross RefCross Ref
  76. Jesse Engel, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, and Adam Roberts. 2019. GANSynth: Adversarial Neural Audio Synthesis. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  77. S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, and Geoffrey E. Hinton. 2016. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. In Advances in Neural Information Processing Systems (NeurIPS’16).Google ScholarGoogle Scholar
  78. Patrick Esser, Robin Rombach, and Bjorn Ommer. 2021. Taming Transformers for High-Resolution Image Synthesis. In Proc. of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21).Google ScholarGoogle ScholarCross RefCross Ref
  79. William Fedus, Ian Goodfellow, and Andrew M. Dai. 2018. MaskGAN: Better Text Generation via Filling in the _______. In Proc. of the 6th International Conference on Learning Representations (ICLR’18).Google ScholarGoogle Scholar
  80. Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, and Ji-Rong Wen. 2022. Towards Artificial General Intelligence via a Multimodal Foundation Model. Nature Communications 13, 1 (2022), 3094.Google ScholarGoogle ScholarCross RefCross Ref
  81. Pablo Fernandes, Joao Nuno Correia, and Penousal Machado. 2020. Evolutionary Latent Space Exploration of Generative Adversarial Networks. In Proc. of the 2020 International Conference on the Applications of Evolutionary Computation (Part of EvoStar’20).Google ScholarGoogle ScholarDigital LibraryDigital Library
  82. Chrisantha Fernando, S. M. Ali Eslami, Jean-Baptiste Alayrac, Piotr Mirowski, Dylan Banarse, and Simon Osindero. 2021. Generative Art Using Neural Visual Grammars and Dual Encoders. (2021). arXiv:2105.00162Google ScholarGoogle Scholar
  83. David Foster. 2019. Generative Deep Learning. O’Reilly, Sebastopol, CA.Google ScholarGoogle Scholar
  84. Giorgio Franceschelli and Mirco Musolesi. 2022. Copyright in Generative Deep Learning. Data & Policy 4(2022), e17.Google ScholarGoogle ScholarCross RefCross Ref
  85. Giorgio Franceschelli and Mirco Musolesi. 2024. Reinforcement Learning for Generative AI: State of the Art, Opportunities and Open Research Challenges. Journal of Artificial Intelligence Research 79 (2024), 417–446.Google ScholarGoogle ScholarDigital LibraryDigital Library
  86. Celso França, Luís Fabrício Wanderley Góes, Alvaro Amorim, Rodrigo C. O. Rocha, and Alysson Ribeiro Da Silva. 2016. Regent-Dependent Creativity: A Domain Independent Metric for the Assessment of Creative Artifacts. In Proc. of the 7th International Conference on Computational Creativity (ICCC’16).Google ScholarGoogle Scholar
  87. Ankush Ganguly and Samuel W. F. Earp. 2021. An Introduction to Variational Inference. (2021). arXiv:2108.13083Google ScholarGoogle Scholar
  88. Albert Gatt and Emiel Krahmer. 2018. Survey of the State of the Art in Natural Language Generation: Core Tasks, Applications and Evaluation. Journal of Artificial Intelligence Research 61, 1(2018), 65–170.Google ScholarGoogle ScholarDigital LibraryDigital Library
  89. Leon Gatys, Alexander Ecker, and Matthias Bethge. 2016. A Neural Algorithm of Artistic Style. Journal of Vision 16, 12 (2016), 326.Google ScholarGoogle ScholarCross RefCross Ref
  90. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. 2017. Controlling Perceptual Factors in Neural Style Transfer. In Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17).Google ScholarGoogle ScholarCross RefCross Ref
  91. Berys Gaut. 2003. Creativity and Imagination. In The Creation of Art: New Essays in Philosophical Aesthetics. Cambridge University Press, 148–173.Google ScholarGoogle Scholar
  92. Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite,..., and Jiawei Zhou. 2021. The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics. In Proc. of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM’21).Google ScholarGoogle ScholarCross RefCross Ref
  93. Gemini Team and Google. 2023. Gemini: A Family of Highly Capable Multimodal Models. (2023). arXiv:2312.11805Google ScholarGoogle Scholar
  94. Gemini Team and Google. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. (2024). Retrieved April 30, 2024 from https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdfGoogle ScholarGoogle Scholar
  95. John Gero. 2000. Computational Models of Innovative and Creative Design Processes. Technological Forecasting and Social Change 64, 2-3(2000), 183–196.Google ScholarGoogle ScholarCross RefCross Ref
  96. Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2023. DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models. In Proc. of the 11th International Conference on Learning Representations (ICLR’23).Google ScholarGoogle Scholar
  97. Ian Goodfellow. 2017. NIPS 2016 Tutorial: Generative Adversarial Networks. (2017). arXiv:1701.00160Google ScholarGoogle Scholar
  98. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems (NeurIPS’14).Google ScholarGoogle Scholar
  99. Kazjon Grace and Mary Lou Maher. 2014. What to Expect when you’re Expecting: The Role of Unexpectedness in Computationally Evaluating Creativity. In Proc. of the 5th International Conference on Computational Creativity (ICCC’14).Google ScholarGoogle Scholar
  100. Daniele Gravina, Antonios Liapis, and Georgios Yannakakis. 2016. Surprise Search: Beyond Objectives and Novelty. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO’16).Google ScholarGoogle ScholarDigital LibraryDigital Library
  101. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. DRAW: A Recurrent Neural Network For Image Generation. In Proc. of the 32nd International Conference on Machine Learning (ICML’15).Google ScholarGoogle Scholar
  102. Albert Gu, Karan Goel, and Christopher Re. 2022. Efficiently Modeling Long Sequences with Structured State Spaces. In Proc. of the 10th International Conference on Learning Representations (ICLR’22).Google ScholarGoogle Scholar
  103. Jie Gui, Z. Sun, Yonggang Wen, Dacheng Tao, and Ye Jie-ping. 2021. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. IEEE Transactions on Knowledge and Data Engineering 35, 4(2021), 3313–3332.Google ScholarGoogle ScholarDigital LibraryDigital Library
  104. Gabriel L. Guimaraes, Benjamin Sanchez-Lengeling, Pedro Luis Cunha Farias, and Alan Aspuru-Guzik. 2017. Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models. (2017). arXiv:1705.10843Google ScholarGoogle Scholar
  105. Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. 2017. PixelVAE: A Latent Variable Model for Natural Images. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  106. Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. 2023. Textbooks Are All You Need. (2023). arXiv:2306.11644Google ScholarGoogle Scholar
  107. Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long Text Generation via Adversarial Training with Leaked Information. In Proc. of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence.Google ScholarGoogle Scholar
  108. Matthew Guzdial and Mark O. Riedl. 2019. An Interaction Framework for Studying Co-Creative AI. (2019). arXiv:1903.09709Google ScholarGoogle Scholar
  109. Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. 2018. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules. ACS Central Science 4, 2 (2018), 268–276.Google ScholarGoogle ScholarCross RefCross Ref
  110. G.M. Harshvardhan, Mahendra Kumar Gourisaria, Manjusha Pandey, and Siddharth Swarup Rautaray. 2020. A Comprehensive Survey and Analysis of Generative Models in Machine Learning. Computer Science Review 38 (2020), 100285.Google ScholarGoogle ScholarDigital LibraryDigital Library
  111. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked Autoencoders Are Scalable Vision Learners. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’22). 16000–16009.Google ScholarGoogle ScholarCross RefCross Ref
  112. Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, and Percy Liang. 2023. Foundation Models and Fair Use. (2023). arXiv:2303.15715Google ScholarGoogle Scholar
  113. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  114. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the Knowledge in a Neural Network. In Proc. of the NeurIPS’15 Deep Learning and Representation Learning Workshop.Google ScholarGoogle Scholar
  115. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems (NeurIPS’20).Google ScholarGoogle Scholar
  116. Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, and Tim Salimans. 2022. Cascaded Diffusion Models for High Fidelity Image Generation. Journal of Machine Learning Research 23, 47 (2022), 1–33.Google ScholarGoogle Scholar
  117. Jonathan Ho and Tim Salimans. 2021. Classifier-Free Diffusion Guidance. In Proc. of the NeurIPS’21 Workshop on Deep Generative Models and Downstream Applications.Google ScholarGoogle Scholar
  118. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. 2022. Video Diffusion Models. In Proc. of the ICLR’22 Workshop on Deep Generative Models for Highly Structured Data.Google ScholarGoogle Scholar
  119. Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780.Google ScholarGoogle ScholarDigital LibraryDigital Library
  120. Douglas R. Hofstadter and Melanie Mitchell. 1994. The Copycat Project: A Model of Mental Fluidity and Analogy-Making. In Advances in Connectionist and Neural Computation Theory, Vol. 2. Analogical Connections. Ablex Publishing, 31–112.Google ScholarGoogle Scholar
  121. Jack Hopkins and Douwe Kiela. 2017. Automatically Generating Rhythmic Verse with Neural Networks. In Proc. of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).Google ScholarGoogle ScholarCross RefCross Ref
  122. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck. 2019. Music Transformer. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  123. Qingqing Huang, Aren Jansen, Joonseok Lee, Ravi Ganti, Judith Yue Li, and Daniel P. W. Ellis. 2022. MuLan: A Joint Embedding of Music Audio and Natural Language. In Proc. of the 23rd International Society for Music Information Retrieval Conference (ISMIR’22).Google ScholarGoogle Scholar
  124. Zhewei Huang, Shuchang Zhou, and Wen Heng. 2019. Learning to Paint With Model-Based Deep Reinforcement Learning. In Proc. of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV’19).Google ScholarGoogle ScholarCross RefCross Ref
  125. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-Image Translation with Conditional Adversarial Networks. In Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17).Google ScholarGoogle ScholarCross RefCross Ref
  126. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  127. Joel Jang, Sumin Shin, and Yoonjeon Kim. 2022. Music2Video: Automatic Generation of Music Video with Fusion of Audio and Text. (2022). arXiv:2201.03809Google ScholarGoogle Scholar
  128. Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, and Douglas Eck. 2017. Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. In Proc. of the 34th International Conference on Machine Learning (ICML’17).Google ScholarGoogle Scholar
  129. Natasha Jaques, Shixiang Gu, Richard E. Turner, and Douglas Eck. 2016. Generating Music by Fine-Tuning Recurrent Neural Networks with Reinforcement Learning. In Proc. of the NeurIPS’16 Deep Reinforcement Learning Workshop.Google ScholarGoogle Scholar
  130. Natasha Jaques, Shixiang Gu, Richard E. Turner, and Douglas Eck. 2017. Tuning Recurrent Neural Networks with Reinforcement Learning. In Proc. of the ICLR’17 Workshop.Google ScholarGoogle Scholar
  131. Divyansh Jha, Hanna Chang, and Mohamed Elhoseiny. 2021. Wolfflin’s Affective Generative Analysis for Visual Art. In Proc. of the 20th International Conference on Computational Creativity (ICCC’21).Google ScholarGoogle Scholar
  132. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. In Proc. of the 38th International Conference on Machine Learning (ICML’21).Google ScholarGoogle Scholar
  133. Nan Jiang, Sheng Jin, Zhiyao Duan, and Changshui Zhang. 2020. RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning. In Proc. of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence.Google ScholarGoogle Scholar
  134. Yanghua Jin, Jiakai Zhang, Minjun Li, Yingtao Tian, Huachun Zhu, and Zhihao Fang. 2017. Towards the Automatic Anime Characters Creation with Generative Adversarial Networks. (2017). arXiv:1708.05509Google ScholarGoogle Scholar
  135. Michael I. Jordan, Zoubin Ghahrmamani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. An Introduction to Variational Methods for Graphical Models. Machine Learning 37(1999), 183–233.Google ScholarGoogle ScholarDigital LibraryDigital Library
  136. Anna Jordanous. 2012. A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative. Cognitive Computation 4(2012), 246–279.Google ScholarGoogle ScholarCross RefCross Ref
  137. Anna Jordanous. 2014. Stepping Back to Progress Forwards: Setting Standards for Meta-Evaluation of Computational Creativity. In Proc. of the 5th International Conference on Computational Creativity (ICCC’14).Google ScholarGoogle Scholar
  138. Anna Jordanous. 2016. Four PPPPerspectives on Computational Creativity in Theory and in Practice. Connection Science 28, 2 (2016), 294–216.Google ScholarGoogle ScholarCross RefCross Ref
  139. John M. Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andy Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas ... Adler, and Demis Hassabis. 2021. Highly Accurate Protein Structure Prediction with AlphaFold. Nature 596(2021), 583–589.Google ScholarGoogle ScholarCross RefCross Ref
  140. Artur Kadurin, Sergey Nikolenko, Kuzma Khrabrov, Alex Aliper, and Alex Zhavoronkov. 2017. druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico. Molecular Pharmaceutics 14, 9 (2017), 3098–3104.Google ScholarGoogle ScholarCross RefCross Ref
  141. Pythagoras Karampiperis, Antonis Koukourikos, and Evangelia Koliopoulou. 2014. Towards Machines for Measuring Creativity: The Use of Computational Tools in Storytelling Activities. In Proc. of the 2014 IEEE 14th International Conference on Advanced Learning Technologies (ICALT’14).Google ScholarGoogle ScholarDigital LibraryDigital Library
  142. Andrej Karpathy. 2015. The Unreasonable Effectiveness of Recurrent Neural Networks. (2015). Retrieved April 30, 2024 from https://karpathy.github.io/2015/05/21/rnn-effectivenessGoogle ScholarGoogle Scholar
  143. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In Proc. of the 6th International Conference on Learning Representations (ICLR’18).Google ScholarGoogle Scholar
  144. Tero Karras, Miika Aittala, Samuli Laine, Erik Harkonen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2021. Alias-Free Generative Adversarial Networks. In Advances in Neural Information Processing Systems (NeurIPS’21).Google ScholarGoogle Scholar
  145. Tero Karras, Samuli Laine, and Timo Aila. 2019. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proc. of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19).Google ScholarGoogle ScholarCross RefCross Ref
  146. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2020. Analyzing and Improving the Image Quality of StyleGAN. In Proc. of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’20).Google ScholarGoogle ScholarCross RefCross Ref
  147. Hadi Kazemi, Seyed Mehdi Iranmanesh, and Nasser Nasrabadi. 2019. Style and Content Disentanglement in Generative Adversarial Networks. In Proc. of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV’19).Google ScholarGoogle ScholarCross RefCross Ref
  148. Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. 2022. Transformers in Vision: A Survey. Comput. Surveys 54, 10s (2022), 1–41.Google ScholarGoogle ScholarDigital LibraryDigital Library
  149. Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised Learning with Deep Generative Models. In Advances in Neural Information Processing Systems (NeurIPS’14).Google ScholarGoogle Scholar
  150. Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In Proc. of the 2nd International Conference on Learning Representations (ICLR’14).Google ScholarGoogle Scholar
  151. Diederik P. Kingma and Max Welling. 2019. An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning 12, 4 (2019), 307–392.Google ScholarGoogle ScholarDigital LibraryDigital Library
  152. Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2021. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In Proc. of the 9th International Conference on Learning Representations (ICLR’21).Google ScholarGoogle Scholar
  153. Carolyn Lamb, Daniel G. Brown, and Charles L. A. Clarke. 2018. Evaluating Computational Creativity: An Interdisciplinary Tutorial. Comput. Surveys 51, 2 (2018), 1–34.Google ScholarGoogle ScholarDigital LibraryDigital Library
  154. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In Proc. of the 8th International Conference on Learning Representations (ICLR’20).Google ScholarGoogle Scholar
  155. Pat Langley, Herbert A. Simon, Gary L. Bradshaw, and Jan M. Zytkow. 1987. Scientific Discovery: Computational Explorations of the Creative Process. The MIT Press, Cambridge, MA.Google ScholarGoogle ScholarDigital LibraryDigital Library
  156. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2016. Autoencoding beyond Pixels Using a Learned Similarity Metric. In Proc. of the 33rd International Conference on Machine Learning (ICML’16).Google ScholarGoogle Scholar
  157. Jey Han Lau, Trevor Cohn, Timothy Baldwin, Julian Brooke, and Adam Hammond. 2018. Deep-Speare: A Joint Neural Model of Poetic Language, Meter and Rhyme. In Proc. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).Google ScholarGoogle ScholarCross RefCross Ref
  158. Phong Le and Willem Zuidema. 2016. Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs. In Proc. of the 1st Workshop on Representation Learning for NLP.Google ScholarGoogle ScholarCross RefCross Ref
  159. Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. 2023. RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback. (2023). arXiv:2309.00267Google ScholarGoogle Scholar
  160. Helena H. Lee, Ke Shu, Palakorn Achananuparp, Philips Kokoh Prasetyo, Yue Liu, Ee-Peng Lim, and Lav R. Varshney. 2020. RecipeGPT: Generative Pre-Training Based Cooking Recipe Generation and Evaluation System. In Companion Proceedings of the Web Conference 2020.Google ScholarGoogle Scholar
  161. Joel Lehman and Kenneth O. Stanley. 2011. Abandoning Objectives: Evolution Through the Search for Novelty Alone. Evolutionary Computation 19, 2 (2011), 189–223.Google ScholarGoogle ScholarDigital LibraryDigital Library
  162. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proc. of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP’21).Google ScholarGoogle ScholarCross RefCross Ref
  163. Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Standing on the Shoulders of Giant Frozen Language Models. (2022). arXiv:2204.10019Google ScholarGoogle Scholar
  164. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proc. of the 58th Annual Meeting of the Association for Computational Linguistics (ACL’20).Google ScholarGoogle ScholarCross RefCross Ref
  165. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep Reinforcement Learning for Dialogue Generation. In Proc. of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP’16).Google ScholarGoogle ScholarCross RefCross Ref
  166. Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural Speech Synthesis with Transformer Network. In Proc. of the 33rd AAAI Conference on Artificial Intelligence and 31st Innovative Applications of Artificial Intelligence Conference and 9th AAAI Symposium on Educational Advances in Artificial Intelligence.Google ScholarGoogle Scholar
  167. Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. 2022. Diffusion-LM Improves Controllable Text Generation. In Advances in Neural Information Processing Systems (NeurIPS’22).Google ScholarGoogle Scholar
  168. Antonios Liapis, Hector P. Martinez, Julian Togelius, and Georgios N. Yannakakis. 2013. Transforming Exploratory Creativity with DeLeNoX. In Proc. of the 4th International Conference on Computational Creativity (ICCC’13).Google ScholarGoogle Scholar
  169. Antonios Liapis, Georgios N. Yannakakis, and Julian Togelius. 2013. Enhancements to Constrained Novelty Search: Two-Population Novelty Search for Generating Game Content. In Proc. of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO’13).Google ScholarGoogle ScholarDigital LibraryDigital Library
  170. Bryan Lim and Stefan Zohren. 2021. Time-series Forecasting with Deep Learning: A Survey. Philosophical Transactions of the Royal Society A 379 (2021), 20200209.Google ScholarGoogle ScholarCross RefCross Ref
  171. Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley. 2023. AudioLDM: Text-to-Audio Generation with Latent Diffusion Models. In Proc. of the 40th International Conference on Machine Learning (ICML’23). 21450–21474.Google ScholarGoogle Scholar
  172. Jialin Liu, Sam Snodgrass, Ahmed Khalifa, Sebastian Risi, Georgios N. Yannakakis, and Julian Togelius. 2021. Deep Learning for Procedural Content Generation. Neural Computing and Applications 33 (2021), 19–37.Google ScholarGoogle ScholarDigital LibraryDigital Library
  173. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. (2019). arXiv:1907.11692Google ScholarGoogle Scholar
  174. Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. 2022. RePaint: Inpainting Using Denoising Diffusion Probabilistic Models. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’22).Google ScholarGoogle ScholarCross RefCross Ref
  175. Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. 2016. Auxiliary Deep Generative Models. In Proc. of the 33rd International Conference on Machine Learning (ICML’16).Google ScholarGoogle Scholar
  176. Penousal Machado, Juan Romero, Antonino Santos, Amílcar Cardoso, and Alejandro Pazos. 2007. On the Development of Evolutionary Artificial Artists. Computers and Graphics 31, 6 (2007), 818–826.Google ScholarGoogle ScholarDigital LibraryDigital Library
  177. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  178. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In Proc. of the 6th International Conference on Learning Representations (ICLR’18).Google ScholarGoogle Scholar
  179. Mary Maher. 2010. Evaluating Creativity in Humans, Computers, and Collectively Intelligent Systems. In Proc. of the 1st DESIRE Network Conference on Creativity and Innovation in Design.Google ScholarGoogle ScholarDigital LibraryDigital Library
  180. Mary Maher and Doug Fisher. 2012. Using AI to Evaluate Creative Designs. In Proc. of the 2nd International Conference on Design Creativity (ICDC’12).Google ScholarGoogle Scholar
  181. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. 2016. Adversarial Autoencoders. In Proc. of the 4th International Conference on Learning Representations (ICLR’16).Google ScholarGoogle Scholar
  182. Lara J. Martin, Prithviraj Ammanabrolu, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl. 2018. Event Representations for Automated Story Generation with Deep Neural Nets. In Proc. of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence.Google ScholarGoogle Scholar
  183. James R. Meehan. 1977. TALE-SPIN, an Interactive Program That Writes Stories. In Proc. of the 5th International Joint Conference on Artificial Intelligence - Volume 1 (IJCAI’77).Google ScholarGoogle Scholar
  184. Luigi Federico Menabrea and Ada Lovelace. 1843. Sketch of The Analytical Engine Invented by Charles Babbage. In Scientific Memoirs. Vol.  3. Richard and John E. Taylor, 666–731.Google ScholarGoogle Scholar
  185. Oscar Mendez-Lucio, Benoit Baillif, Djork-Arné Clevert, David Rouquié, and Joerg Wichard. 2020. De novo generation of hit-like molecules from gene expression signatures using artificial intelligence. Nature Communications 11, 10 (2020), 1–10.Google ScholarGoogle ScholarCross RefCross Ref
  186. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. 2017. Unrolled Generative Adversarial Networks. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).Google ScholarGoogle Scholar
  187. Arthur I. Miller. 2019. The Artist in the Machine. The MIT Press, Cambridge, MA.Google ScholarGoogle Scholar
  188. Marvin Minsky. 2006. The Emotion Machine. Simon & Schuster, New York, NY.Google ScholarGoogle Scholar
  189. Mehdi Mirza and Simon Osindero. 2014. Conditional Generative Adversarial Nets. (2014). arXiv:1411.1784Google ScholarGoogle Scholar
  190. Gautam Mittal, Jesse Engel, Curtis Hawthorne, and Ian Simon. 2021. Symbolic Music Generation with Diffusion Models. In Proc. of the 22nd Int. Society for Music Information Retrieval Conf. (ISMIR’21).Google ScholarGoogle Scholar
  191. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-Level Control Through Deep Reinforcement Learning. Nature 518(2015), 529–533.Google ScholarGoogle ScholarCross RefCross Ref
  192. Alexander Mordvintsev, Christopher Olah, and Mike Tyka. 2015. Inceptionism: Going Deeper into Neural Networks. (2015). Retrieved April 30, 2024 from https://blog.research.google/2015/06/inceptionism-going-deeper-into-neural.htmlGoogle ScholarGoogle Scholar
  193. Richard G. Morris, Scott H. Burton, Paul Bodily, and Dan Ventura. 2012. Soup Over Bean of Pure Joy: Culinary Ruminations of an Artificial Chef. In Proc. of the 3rd International Conference on Computational Creativity (ICCC’12).Google ScholarGoogle Scholar
  194. Saman Motamed, Patrik Rogalla, and Farzad Khalvati. 2021. RANDGAN: Randomized Generative Adversarial Network for Detection of COVID-19 in Chest X-Ray. Scientific Reports 11(2021), 8602.Google ScholarGoogle ScholarCross RefCross Ref
  195. Allen Newell, J. C. Shaw, and Herbert A. Simon. 1962. The Processes of Creative Thinking. In Contemporary Approaches to Creative Thinking: A Symposium Held at the University of Colorado. Atherton Press, 63–119.Google ScholarGoogle Scholar
  196. Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. 2017. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. In Proc. of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17).Google ScholarGoogle ScholarCross RefCross Ref
  197. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. 2016. Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks. In Advances in Neural Information Processing Systems (NeurIPS’16).Google ScholarGoogle Scholar
  198. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In Proc. of the 39th International Conference on Machine Learning (ICML’22).Google ScholarGoogle Scholar
  199. Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. 2022. Point-E: A System for Generating 3D Point Clouds from Complex Prompts. (2022). arXiv:2212.08751Google ScholarGoogle Scholar
  200. Alexander Quinn Nichol and Prafulla Dhariwal. 2021. Improved Denoising Diffusion Probabilistic Models. In Proc. of the 38th International Conference on Machine Learning (ICML’21).Google ScholarGoogle Scholar
  201. Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational Generative Adversarial Networks for Text Generation. In Proc. of the 7th International Conference on Learning Representations (ICLR’19).Google ScholarGoogle Scholar
  202. David Norton, Derral Heath, and Dan Ventura. 2010. Establishing Appreciation in a Creative System. In Proc. of the 1st International Conference on Computational Creativity (ICCC’15).Google ScholarGoogle Scholar
  203. Augustus Odena. 2016. Semi-Supervised Learning with Generative Adversarial Networks. (2016). arXiv:1606.01583Google ScholarGoogle Scholar
  204. Augustus Odena, Christopher Olah, and Jonathon Shlens. 2017. Conditional Image Synthesis with Auxiliary Classifier GANs. In Proc. of the 34th International Conference on Machine Learning (ICML’17).Google ScholarGoogle ScholarDigital LibraryDigital Library
  205. Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. 2017. Feature Visualization. (2017). Distill.Google ScholarGoogle Scholar
  206. OpenAI. 2022. Introducing ChatGPT. (2022). Retrieved April 30, 2024 from https://openai.com/blog/chatgptGoogle ScholarGoogle Scholar
  207. OpenAI. 2023. GPT-4 Technical Report. (2023). arXiv:2303.08774Google ScholarGoogle Scholar
  208. Rafael Pardinas, Gabriel Huang, David Vazquez, and Alexandre Piché. 2023. Leveraging Human Preferences to Master Poetry. In Proc. of the AAAI’23 Workshop on Creative AI Across Modalities.Google ScholarGoogle Scholar
  209. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image Transformer. In Proc. of the 35th International Conference on Machine Learning (ICML’18).Google ScholarGoogle Scholar
  210. Christine Payne. 2019. MuseNet. (2019). Retrieved April 30, 2024 from https://openai.com/blog/musenetGoogle ScholarGoogle Scholar
  211. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Chung, Leon Derczynski, Xingjian Du, Matteo Grella, Kranthi Gv, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartłomiej Koptyra,..., and Rui-Jie Zhu. 2023. RWKV: Reinventing RNNs for the Transformer Era. In Findings of the Association for Computational Linguistics: EMNLP’23.Google ScholarGoogle Scholar
  212. Francisco C. Pereira, Mateus Mendes, Pablo Gervas, and Amilcar Cardoso. 2005. Experiments With Assessment of Creative Systems: An Application of Ritchie’s Criteria. In Proc. of the IJCAI’15 Second Computational Creativity Workshop.Google ScholarGoogle Scholar
  213. Peter Potash, Alexey Romanov, and Anna Rumshisky. 2015. GhostWriter: Using an LSTM for Automatic Rap Lyric Generation. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP’15).Google ScholarGoogle ScholarCross RefCross Ref
  214. Racter. 1984. The Policeman’s Beard Is Half Constructed. Warner Books, Inc., New York, NY.Google ScholarGoogle Scholar
215. Alec Radford. 2018. Improving Language Understanding with Unsupervised Learning. (2018). Retrieved April 30, 2024 from https://openai.com/blog/language-unsupervised
216. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. In Proc. of the 38th International Conference on Machine Learning (ICML’21).
217. Alec Radford, Luke Metz, and Soumith Chintala. 2016. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In Proc. of the 4th International Conference on Learning Representations (ICLR’16).
218. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019). Retrieved April 30, 2024 from https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
219. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. In Advances in Neural Information Processing Systems (NeurIPS’23).
220. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67.
221. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. (2022). arXiv:2204.06125
222. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence Level Training with Recurrent Neural Networks. In Proc. of the 4th International Conference on Learning Representations (ICLR’16).
223. Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking. In Proc. of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP’20).
224. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative Adversarial Text to Image Synthesis. In Proc. of the 33rd International Conference on Machine Learning (ICML’16).
225. Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. 2022. A Generalist Agent. Transactions on Machine Learning Research (2022).
226. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proc. of the 31st International Conference on Machine Learning (ICML’14).
227. Mel Rhodes. 1961. An Analysis of Creativity. The Phi Delta Kappan 42, 7 (1961), 305–310.
228. Mark O. Riedl. 2014. The Lovelace 2.0 Test of Artificial Creativity and Intelligence. (2014). arXiv:1410.6142
229. Graeme Ritchie. 2007. Some Empirical Criteria for Attributing Creativity to a Computer Program. Minds and Machines 17 (2007), 67–99.
230. Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck. 2018. A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music. In Proc. of the 35th International Conference on Machine Learning (ICML’18).
231. Melissa Roemmele and Andrew S. Gordon. 2018. Automated Assistance for Creative Writing with an RNN Language Model. In Proc. of the 23rd International Conference on Intelligent User Interfaces Companion (IUI’18).
232. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’22).
233. Jon Rowe and Derek Partridge. 1993. Creativity: A Survey of AI Approaches. Artificial Intelligence Review 7 (1993), 43–70.
234. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. 2023. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’23).
235. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In Advances in Neural Information Processing Systems (NeurIPS’22).
236. Sameh Said Metwaly, Wim Van den Noortgate, and Eva Kyndt. 2017. Approaches to Measuring Creativity: A Systematic Literature Review. Creativity. Theories – Research – Applications 4, 2 (2017), 238–275.
237. Pamela Samuelson. 1985. Allocating Ownership Rights in Computer-Generated Works. University of Pittsburgh Law Review 47 (1985), 1185.
238. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. In Proc. of the NeurIPS’19 Workshop.
239. Othman Sbai, Mohamed Elhoseiny, Antoine Bordes, Yann LeCun, and Camille Couprie. 2019. DesIGN: Design Inspiration from Generative Networks. In Proc. of the Computer Vision – ECCV’18 Workshops.
240. Jürgen Schmidhuber. 2010. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010). IEEE Transactions on Autonomous Mental Development 2, 3 (2010), 230–247.
241. Michael D. Schmidt and Hod Lipson. 2009. Distilling Free-Form Natural Laws from Experimental Data. Science 324 (2009), 81–85.
242. Victor Schmidt, Alexandra Sasha Luccioni, Mélisande Teng, Tianyu Zhang, Alexia Reynaud, Sunand Raghupathi, Gautier Cosne, Adrien Juraver, Vahe Vardanyan, Alex Hernandez-Garcia, and Yoshua Bengio. 2022. ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods. In Proc. of the 10th International Conference on Learning Representations (ICLR’22).
243. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A Hybrid Convolutional Variational Autoencoder for Text Generation. In Proc. of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP’17).
244. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. In Proc. of the 5th International Conference on Learning Representations (ICLR’17).
245. Zhan Shi, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2018. Toward Diverse Text Generation with Inverse Reinforcement Learning. In Proc. of the 27th International Joint Conference on Artificial Intelligence (IJCAI’18).
246. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. (2019). arXiv:1909.08053
247. Jaskirat Singh, Cameron Smith, Jose Echevarria, and Liang Zheng. 2022. Intelli-Paint: Towards Developing More Human-Intelligible Painting Agents. In Proc. of the 17th European Conference on Computer Vision (ECCV’22).
248. Ivan Skorokhodov, Sergey Tulyakov, and Mohamed Elhoseiny. 2022. StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2. In Proc. of the 2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’22).
249. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. (2022). arXiv:2201.11990
250. Charlie Snell. 2021. Alien Dreams: An Emerging Art Scene. (2021). Retrieved April 30, 2024 from https://mlberkeley.substack.com/p/clip-art/
251. Charlie V. Snell, Ilya Kostrikov, Yi Su, Sherry Yang, and Sergey Levine. 2023. Offline RL for Natural Language Generation with Implicit Language Q Learning. In Proc. of the 11th International Conference on Learning Representations (ICLR’23).
252. Benjamin Sobel. 2020. A Taxonomy of Training Data: Disentangling the Mismatched Rights, Remedies, and Rationales for Restricting Machine Learning. Artificial Intelligence and Intellectual Property (2020), 36.
253. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Proc. of the 32nd International Conference on Machine Learning (ICML’15).
254. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. 2023. Consistency Models. In Proc. of the 40th International Conference on Machine Learning (ICML’23).
255. Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. In Advances in Neural Information Processing Systems (NeurIPS’19).
256. Yang Song and Stefano Ermon. 2020. Improved Techniques for Training Score-Based Generative Models. In Advances in Neural Information Processing Systems (NeurIPS’20).
257. Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In Proc. of the 9th International Conference on Learning Representations (ICLR’21).
258. Michael Steinbach, Levent Ertöz, and Vipin Kumar. 2004. The Challenges of Clustering High Dimensional Data. New Directions in Statistical Physics 213 (2004), 273–309.
259. Claire Stevenson, Iris Smal, Matthijs Baas, Raoul Grasman, and Han van der Maas. 2022. Putting GPT-3’s Creativity to the (Alternative Uses) Test. In Proc. of the 13th International Conference on Computational Creativity (ICCC’22).
260. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to Summarize with Human Feedback. In Advances in Neural Information Processing Systems (NeurIPS’20).
261. Bob L. Sturm, João Felipe Santos, Oded Ben-Tal, and Iryna Korshunova. 2016. Music Transcription Modelling and Composition Using Deep Learning. In Proc. of the 1st Conference on Computer Simulation of Musical Creativity (CSMC’16).
262. Douglas Summers-Stay, Clare R. Voss, and Stephanie M. Lukin. 2023. Brainstorm, then Select: A Generative Language Model Improves Its Creativity Score. In Proc. of the AAAI’23 Workshop on Creative AI Across Modalities.
263. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems (NeurIPS’14).
264. Masahiro Suzuki and Yutaka Matsuo. 2022. A Survey of Multimodal Deep Generative Models. Advanced Robotics 36, 5–6 (2022), 261–278.
265. Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, and Mark O. Riedl. 2019. Controllable Neural Story Plot Generation via Reward Shaping. In Proc. of the 28th International Joint Conference on Artificial Intelligence (IJCAI’19).
266. Wei Ren Tan, Chee Seng Chan, Hernán E. Aguirre, and Kiyoshi Tanaka. 2017. ArtGAN: Artwork Synthesis with Conditional Categorical GANs. In Proc. of the 2017 IEEE International Conference on Image Processing (ICIP’17).
267. Yingtao Tian and David Ha. 2022. Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts. In Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART 2022).
268. Hannu Toivonen and Oskar Gross. 2015. Data Mining and Machine Learning in Computational Creativity. WIREs Data Mining and Knowledge Discovery 5, 6 (2015), 265–275.
269. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. (2023). arXiv:2302.13971
270. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu,..., and Thomas Scialom. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. (2023). arXiv:2307.09288
271. Donald J. Treffinger. 1996. Creativity, Creative Thinking, and Critical Thinking: In Search of Definitions. Center for Creative Learning, Sarasota, FL.
272. Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal Few-Shot Learning with Frozen Language Models. In Advances in Neural Information Processing Systems (NeurIPS’21).
273. Lewis Tunstall, Leandro von Werra, and Thomas Wolf. 2022. Natural Language Processing with Transformers. O’Reilly, Sebastopol, CA.
274. Alan M. Turing. 1950. Computing Machinery and Intelligence. Mind LIX, 236 (1950), 433–460.
275. Aaron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. WaveNet: A Generative Model for Raw Audio. In Proc. of the 9th ISCA Workshop on Speech Synthesis.
276. Aaron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. Pixel Recurrent Neural Networks. In Proc. of the 33rd International Conference on Machine Learning (ICML’16).
277. Aaron Van Den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. 2016. Conditional Image Generation with PixelCNN Decoders. In Advances in Neural Information Processing Systems (NeurIPS’16).
278. Aaron Van Den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. In Advances in Neural Information Processing Systems (NeurIPS’17).
279. Lav R. Varshney. 2019. Mathematical Limit Theorems for Computational Creativity. IBM Journal of Research and Development 63, 1 (2019), 2:1–2:12.
280. Lav R. Varshney, Florian Pinel, Kush R. Varshney, Debarun Bhattacharjya, Angela Schoergendorfer, and Y-Min Chee. 2019. A Big Data Approach to Computational Creativity: The Curious Case of Chef Watson. IBM Journal of Research and Development 63, 1 (2019), 7:1–7:18.
281. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Advances in Neural Information Processing Systems (NeurIPS’17).
282. Dan Ventura. 2016. Mere Generation: Essential Barometer or Dated Concept? In Proc. of the 7th International Conference on Computational Creativity (ICCC’16).
283. Gauthier Vernier, Hugo Caselles-Dupré, and Pierre Fautrel. 2020. Electric Dreams of Ukiyo: A Series of Japanese Artworks Created by an Artificial Intelligence. Patterns 1, 2 (2020), 100026.
284. Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers. (2023). arXiv:2301.02111
285. Zhengwei Wang, Qi She, and Tomás E. Ward. 2021. Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy. Comput. Surveys 54, 2 (2021), 1–38.
286. Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, and Martin Riedmiller. 2015. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. In Advances in Neural Information Processing Systems (NeurIPS’15).
287. Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac,..., and Iason Gabriel. 2022. Taxonomy of Risks Posed by Language Models. In Proc. of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT’22).
288. Max Welling and Yee Whye Teh. 2011. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In Proc. of the 28th International Conference on Machine Learning (ICML’11).
289. Geraint A. Wiggins. 2006. Searching for Computational Creativity. New Generation Computing 24 (2006), 209–222.
290. Ronald J. Williams. 1992. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning 8 (1992), 229–256.
291. Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, and Juan Pablo Bello. 2022. Wav2CLIP: Learning Robust Audio Representations from CLIP. In Proc. of the 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’22).
292. Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. 2016. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In Advances in Neural Information Processing Systems (NeurIPS’16), Vol. 29.
293. Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and Tat-Seng Chua. 2023. NExT-GPT: Any-to-Any Multimodal LLM. (2023). arXiv:2309.05519
294. Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. 2018. Generating Adversarial Examples with Adversarial Networks. In Proc. of the 27th International Joint Conference on Artificial Intelligence (IJCAI’18).
295. Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. 2021. VideoGPT: Video Generation using VQ-VAE and Transformers. (2021). arXiv:2104.10157
296. Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2016. Attribute2Image: Conditional Image Generation from Visual Attributes. In Proc. of the 14th European Conference on Computer Vision (ECCV’16).
297. Kevin Yang and Dan Klein. 2021. FUDGE: Controlled Text Generation With Future Discriminators. In Proc. of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL’21).
298. Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2023. Diffusion Models: A Comprehensive Survey of Methods and Applications. Comput. Surveys 56, 4 (2023), 39.
299. Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Wenhao Li. 2018. Automatic Poetry Generation with Mutual Reinforcement Learning. In Proc. of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP’18).
300. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. In Proc. of the 31st AAAI Conference on Artificial Intelligence (AAAI’17).
301. Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2019. Self-Attention Generative Adversarial Networks. In Proc. of the 36th International Conference on Machine Learning (ICML’19).
302. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. (2022). arXiv:2205.01068
303. Xingxing Zhang and Mirella Lapata. 2014. Chinese Poetry Generation with Recurrent Neural Networks. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP’14).
304. Yizhe Zhang, Zhe Gan, and Lawrence Carin. 2016. Generating Text via Adversarial Training. In Proc. of the NeurIPS’16 Workshop on Adversarial Training.
305. Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial Feature Matching for Text Generation. In Proc. of the 34th International Conference on Machine Learning (ICML’17).
306. Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, and Joseph Tighe. 2021. VidTr: Video Transformer Without Convolutions. In Proc. of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV’21).
307. Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2017. Towards Deeper Understanding of Variational Autoencoding Models. (2017). arXiv:1702.08658
308. Yunpu Zhao, Rui Zhang, Wenyi Li, Di Huang, Jiaming Guo, Shaohui Peng, Yifan Hao, Yuanbo Wen, Xing Hu, Zidong Du, Qi Guo, Ling Li, and Yunji Chen. 2024. Assessing and Understanding Creativity in Large Language Models. (2024). arXiv:2401.12491
309. Tao Zhou, Chen Fang, Zhaowen Wang, Jimei Yang, Byungmoon Kim, Zhili Chen, Jonathan Brandt, and Demetri Terzopoulos. 2018. Learning to Sketch with Deep Q Networks and Demonstrated Strokes. (2018). arXiv:1810.05977
310. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proc. of the 2017 IEEE International Conference on Computer Vision (ICCV’17).
311. Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. 2008. Maximum Entropy Inverse Reinforcement Learning. In Proc. of the 23rd AAAI Conference on Artificial Intelligence (AAAI’08).
312. Andrea Zugarini, Stefano Melacci, and Marco Maggini. 2019. Neural Poetry: Learning to Generate Poems Using Syllables. In Proc. of the 29th International Conference on Artificial Neural Networks (ICANN’19).

Published in

ACM Computing Surveys (Just Accepted). ISSN: 0360-0300. EISSN: 1557-7341.

Copyright © 2024 held by the owner/author(s).

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History: Online AM: 11 May 2024; Accepted: 23 April 2024; Revised: 26 February 2024; Received: 6 April 2021.
