Mood Themes the World
DOI: https://doi.org/10.25038/am.v0i29.556
Keywords: apparatus; theme; mood; artificial intelligence; electracy.
Abstract
Apparatus theory (a hybrid of McLuhan and Derrida) hypothesizes that a civilization of electracy (the digital apparatus) must learn how to thrive in a lifeworld in which the visceral faculty of appetite is hegemonic. The dominant axis of behavior today is fantasy-anxiety (attraction/repulsion). We propose that world theming has created a vernacular discourse that may be raised to a second power of expression as a vehicle of visceral intelligence. The immediate claim is that theming in digital media augments mood (ambiance) into a power of imagination, just as dialectic in writing augmented logic into a power of reason. Fantasy today is persuasive, just as logical entailment is (was) in the rational order of literacy. Decisions determining real events today are being made in worlds of mood.
World theming is evident in the vernacular art practices arising from recent advances in artificial intelligence. The availability of commodity GPUs, along with public access to advanced research via GitHub, Kaggle, and Hugging Face, and the proliferation of forums such as Reddit, Discord, and YouTube, has resulted in a renaissance of public engagement with technology-informed creative practice. In addition, the general availability of Google's previously internal-only development tool, Colab, beginning in late 2017, provided access to cloud-based GPUs and storage systems previously accessible only to data scientists and academics.
In early 2021 Ryan Murdock released a Colab notebook called Big Sleep that combined OpenAI's recently published Contrastive Language-Image Pre-training (CLIP) with BigGAN. This model is a paradigmatic example of our observation. By early 2022, multiple derivatives of this process had incorporated alternative image-generation techniques. This paper will demonstrate how the fundamental basis of these methods is distinctly electrate in its use of ‘theme’ and emphasis on ‘mood’ in world-building, with reference to a case-study animation called Dissipative Off-ramps.
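To make the mechanics concrete, the sketch below illustrates the general CLIP-guided BigGAN loop that Big Sleep popularized: BigGAN's latent inputs are optimized so that CLIP scores the generated image as increasingly similar to a text prompt. It is a minimal sketch written against the publicly available openai/CLIP and pytorch-pretrained-BigGAN packages, not Murdock's notebook itself; his implementation differs in detail (latent parameterization, cutout augmentation, additional loss terms), and the prompt string is purely illustrative.

```python
# Minimal sketch of CLIP-guided BigGAN generation in the spirit of Big Sleep.
# Assumes openai/CLIP and pytorch-pretrained-BigGAN; not Murdock's exact code.
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained, frozen models: CLIP as the "perceptor", BigGAN as the generator.
perceptor, _ = clip.load("ViT-B/32", device=device)
generator = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

# Encode the theme/mood prompt once; it stays fixed during optimization.
prompt = "a foggy highway off-ramp at dusk, cinematic mood"  # illustrative only
text_features = perceptor.encode_text(clip.tokenize([prompt]).to(device)).detach()

# The parameters being optimized are BigGAN's inputs: a truncated noise vector
# and class logits that are softmax-ed into a soft ImageNet class vector.
noise = torch.tensor(truncated_noise_sample(truncation=0.4, batch_size=1),
                     device=device, requires_grad=True)
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
optimizer = torch.optim.Adam([noise, class_logits], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # Generate an image in [-1, 1] from the current latent and class vector.
    image = generator(noise, torch.softmax(class_logits, dim=-1), 0.4)
    # Resize to CLIP's 224x224 input and rescale to [0, 1]
    # (CLIP's channel normalization is omitted here for brevity).
    image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    image_features = perceptor.encode_image((image + 1) / 2)
    # Steer the generator by maximizing image-text cosine similarity.
    loss = -torch.cosine_similarity(image_features, text_features, dim=-1).mean()
    loss.backward()
    optimizer.step()
```

Derivatives such as Disco Diffusion retain this generate-encode-compare-backpropagate loop while swapping BigGAN for other image generators, notably diffusion models.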
References
Brock, Andrew, Jeff Donahue, and Karen Simonyan. “Large Scale GAN Training for High Fidelity Natural Image Synthesis.” ArXiv 1809.11096 [Cs, Stat], 2019. http://arxiv.org/abs/1809.11096. Accessed on March 6, 2023.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.
Crowson, K., D. Russell, Dango233, M. Shepherd, C. Allen, C. Nri, Somnai, A. Letts, and Gandamu. Disco Diffusion [Jupyter Notebook]. alembics, 2022. https://github.com/alembics/disco-diffusion. Accessed on March 6, 2023.
Dayma, Boris, Suraj Patil, P. Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê Khắc, L. Melas, and Ritobrata Ghosh. DALL·E Mini, 2021. https://doi.org/10.5281/zenodo.5146400
Dhariwal, Prafulla, and Alex Nichol. “Diffusion Models Beat GANs on Image Synthesis.” ArXiv 2105.05233 [Cs, Stat], 2021. http://arxiv.org/abs/2105.05233. Accessed on March 6, 2023.
Dosovitskiy, Alexey, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” ArXiv 2010.11929, 2021. https://doi.org/10.48550/arXiv.2010.11929
Esser, Patrick, Robin Rombach, and Björn Ommer. “Taming Transformers for High-Resolution Image Synthesis.” ArXiv 2012.09841 [Cs], 2021. http://arxiv.org/abs/2012.09841. Accessed on March 6, 2023.
Harmeet. “Disco Diffusion 70+ Artist Studies.” Weird Wonderful AI Art, February 26, 2022. https://weirdwonderfulai.art/resources/disco-diffusion-70-plus-artist-studies/. Accessed on March 6, 2023.
Harmeet. “Anything Punk Modifiers for AI Art.” Weird Wonderful AI Art, March 7, 2022. https://weirdwonderfulai.art/resources/anything-punk-modifiers-for-ai-art/. Accessed on March 6, 2023.
Harmeet. “Disco Diffusion Modifiers.” Weird Wonderful AI Art, March 25, 2022. https://weirdwonderfulai.art/resources/disco-diffusion-modifiers/. Accessed on March 6, 2023.
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” ArXiv 1512.03385, 2015. https://doi.org/10.48550/arXiv.1512.03385
“HiPerGator.” Research Computing, University of Florida, n.d. https://www.rc.ufl.edu/about/hipergator/. Accessed on July 24, 2022.
Stenner, Jack, dir. Dissipative Off-ramps, 2022. https://www.youtube.com/watch?v=t3HTwt3uInQ. July 26, 2022.
Kantayya, Shalini, dir. Coded Bias, 2020. https://www.codedbias.com. Accessed on March 6, 2023.
Karras, Tero, Samuli Laine, and Timo Aila. “A Style-Based Generator Architecture for Generative Adversarial Networks.” ArXiv 1812.04948; Version 3, 2019. https://doi.org/10.48550/arXiv.1812.04948
LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE 86, no. 11 (1998): 2278–2324. https://doi.org/10.1109/5.726791
Lin, Tsung-Yi, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. “Microsoft COCO: Common Objects in Context.” ArXiv 1405.0312, 2015. https://doi.org/10.48550/arXiv.1405.0312
Mansimov, Elman, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. “Generating Images from Captions with Attention.” ArXiv 1511.02793, 2016. http://arxiv.org/abs/1511.02793. Accessed on March 6, 2023.
Mordvintsev, Alexander. “Inceptionism: Going Deeper into Neural Networks.” Google AI Blog, June 17, 2015. http://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html. Accessed on March 6, 2023.
Radford, Alec et al. “Learning Transferable Visual Models from Natural Language Supervision.” ArXiv 2103.00020 [Cs], 2021. http://arxiv.org/abs/2103.00020. Accessed on March 6, 2023.
Ramesh, Aditya, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. “Hierarchical Text-Conditional Image Generation with CLIP Latents.” ArXiv 2204.06125, 2022. http://arxiv.org/abs/2204.06125. Accessed on March 6, 2023.
Ridnik, Tal, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. “ImageNet-21K Pretraining for the Masses.” ArXiv 2104.10972, 2021. https://doi.org/10.48550/arXiv.2104.10972
Wang, Xintao, Liangbin Xie, Chao Dong, and Ying Shan. “Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.” In International Conference on Computer Vision Workshops (ICCVW), 2021.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First trade paperback edition. New York: PublicAffairs, 2020.
License
Copyright (c) 2023 AM Journal of Art and Media Studies
This work is licensed under a Creative Commons Attribution 4.0 International License.