Neural Networks and Digital Arts: Some Reflections
by Rômulo Augusto Vieira Costa 1,*, Flávio Luiz Schiavoni 2
Arts Lab in Interfaces, Computers, and Everything Else – ALICE, Federal University of São João del-Rei, UFSJ,
São João del-Rei, Minas Gerais, Brazil
* Author to whom correspondence should be addressed.
Journal of Engineering Research and Sciences, Volume 1, Issue 1, Pages 10–18, 2022; DOI: 10.55708/js0101002
Keywords: Computer music, Generative art, Neural networks
Received: 20 January 2022, Revised: 25 January 2022, Accepted: 06 February 2022, Published Online: 24 February 2022
The constant advancement of machine learning has brought together areas that until then had little in common, in this case computing and the arts in general. With the emergence of digital art, people have become increasingly interested in the development of expressive techniques and algorithms for creating works of art, whether in the form of music, images, aesthetic artifacts, or combinations of these, usually applied in interactive technology installations. Owing to their great creative diversity and processing complexity, neural networks have been used to create digital works whose results are difficult to reproduce by human hand; such works are usually presented in museums and conferences, or even sold at auction for high prices. As these works gain more and more recognition in the art scene, questions have been raised about authenticity and art. This work therefore addresses the historical context of the advancement of machine learning and the evolution of neural networks in this field, and discusses what art would be and who would be the artist responsible for a digital work: although the computer performs a good part of the creation process, it does not perform the entire process, remaining dependent on the programmer, who is responsible for defining the parameters, the techniques and, above all, the concepts that attribute aesthetic value to the work. From this point of view, and given the growing interest in computer-generated art, the present work presents applied research on neural network techniques and how they can be applied in artistic practice, generating either visual or sound elements. Finally, perspectives for the future and how this area can evolve even further are presented.
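As a minimal, purely illustrative sketch of one way a neural network can generate visual elements (this example is not from the paper: it uses an untrained CPPN-style network, and every function name and parameter below is an assumption), a small fully connected network can map each pixel's coordinates to an RGB color, producing an abstract image:

```python
# Illustrative sketch (not the authors' method): an untrained CPPN-style
# network maps pixel coordinates (x, y, radius) to RGB colors, yielding
# an abstract "generative art" image. All names and sizes are hypothetical.
import numpy as np

def random_network(layer_sizes, seed=0):
    """Random weight matrices for a small fully connected network."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, 1.0, size=(m, n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(coords, weights):
    """tanh in the hidden layers, sigmoid on the RGB output layer."""
    h = coords
    for w in weights[:-1]:
        h = np.tanh(h @ w)
    return 1.0 / (1.0 + np.exp(-(h @ weights[-1])))  # values in (0, 1)

def render(width=256, height=256, seed=0):
    """Evaluate the network at every pixel; return a uint8 RGB image."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    r = np.sqrt(xs ** 2 + ys ** 2)                  # radial distance feature
    coords = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)
    weights = random_network([3, 16, 16, 3], seed)  # 3 inputs -> 3 RGB outputs
    rgb = forward(coords, weights).reshape(height, width, 3)
    return (rgb * 255).astype(np.uint8)

if __name__ == "__main__":
    img = render(seed=42)                  # a different seed, a different image
    h, w = img.shape[:2]
    with open("generated.ppm", "wb") as f: # plain PPM output, no extra libraries
        f.write(f"P6 {w} {h} 255\n".encode())
        f.write(img.tobytes())
```

Unlike trained generative models, this sketch only samples random weights rather than learning from data; it merely makes concrete the division of labor discussed above, where the programmer chooses the inputs, the architecture, and the seed, while the network produces the image.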