By Rosemary Lee
Chapter 1 of Algorithm, Image, Art
Introduction
Algorithms play an increasingly influential role in the production, circulation, and interpretation of images, a shift that has complex implications for art, the humanities, and visual culture. Machine learning has spread far beyond the technical research contexts in which it emerged and to which it was previously largely confined, finding a wide variety of visual applications, from the direct generation of images to shaping the display of visual content through the large-scale collection and analysis of data. The highly automated performance of visual processing tasks by machines allows digital aesthetics to be informed by algorithms, statistical models, and data. As a result, images are increasingly defined by their engagement with algorithms, which structure them aesthetically, processually, and semantically in ways that often exceed description in terms of direct human perception, agency, and understanding, even as they remain deeply informed by and entangled with these. In this sense, recent technical developments such as the computational generation of images using machine learning systems tap into long-running theoretical challenges regarding the non-visual, immaterial, and non-human aspects of art, images, and visual media. This book examines how the algorithmic structuring of images may offer new ways of understanding recent technical developments and their surrounding discourses. Through close examination of relevant instances from the history of visual technologies, we consider how current discourses surrounding nascent forms of image-making may in some cases disrupt, and in others reinforce, established conventions in thinking about visual media.
Visual technologies that emphasize algorithms, statistical models, and data reshape not only the way images appear but also how they behave and what significance they have. While they may be nearly ubiquitous, algorithms often play a cryptic role in visual media, creating a distinct gap between the visible surface of an image and the processes and data behind it. We may understand the recent predominance of machine learning as a paradigm of image-making as an algorithmic turn (Uricchio, 2011) in visual media, one that prioritizes data and the performance of algorithmic processes over the visual qualities of what a given system outputs. In combination with a high degree of technical opacity, this brings the complex relationships between the visual and non-visual aspects of algorithmic imaging into contention with traditional perspectives that assume a close referential correspondence between images and real-world phenomena.
The growing technical capacities of algorithmic visual media have attracted the attention of artists, theorists, and institutions curious about the implications for art, visual media, society, and digital culture, a curiosity that has produced a wave of practical experimentation and theoretical inquiry. These explorations by artists, theorists, and artist-theorists have yielded new ways of thinking about images, visual media, and the cultural role of technology, with contributions taking the form of art projects, technical experiments, and theoretical texts that approach the subject from various perspectives. Critically engaged discussions of machine learning have pointed out ethically problematic assumptions, internal logics, and extractivist tendencies embedded in the very foundations of the procurement, management, and implementation of data-based approaches to visual media.
The development of machine learning tools that are accessible to non-experts has given machine learning and artificial intelligence widespread use and visibility among the general public. One effect of this is that users can now create images in ways that previously required significant technical knowledge, to some extent domesticating the technology, but also in some cases revealing its limitations: constrained technical capacities, a tendency towards difficult-to-remediate error and bias, and conceptual framings that often reiterate existing clichés such as the robot apocalypse or the AI as artist. While the recent popularity of intersections between artificial intelligence and art has become a lively sphere for experimentation and discussion, many aspects continue to pose significant theoretical challenges. A contributing factor to the difficulty of critically grappling with this area of research is that the processes at work within machine learning systems are often opaque to human understanding, even to those who design, build, and operate them. Rapidly growing technical affordances, corporations' active attempts to conceal the inner workings of their technologies, and the ideas attached to the term artificial intelligence all tend to obscure what is at stake in the use of highly automated visual systems to create and interpret images.
Certain conceptual ambiguities emerge from machine learning's historical association with artificial intelligence. These include a recurring reliance on cybernetic metaphors that compare biological and ecological systems to the modalities at work in technical systems. Likening computers to human brains can indeed help to illustrate how the tasks performed by machines may share attributes with cognition, reasoning, and intelligence. However, such analogies often lend themselves more to opacity than clarity, contributing to inaccuracies as well as reinforcing dangerous biases within technical systems. Beyond the misconceptions that machine learning brings with it, there is also a long history of skepticism towards the use of new technologies in art, often centered on familiar themes such as the threat they may pose to the role of the (human) author or, in a reversal of this premise, the treatment of the machine as an artist or author.
In attempting to grapple with novel aspects of contemporary art and visual media that have resulted from the pervasive influence of algorithms, we are confronted with existing conventions in thinking about images that have accompanied nascent visual technologies in the past. Current forms of visual media may in some respects depart from older paradigms, for example by affording new levels of acceleration and automation and by introducing new modalities into both the creation and interpretation of visual content. Yet the images generated by machine learning systems are not entirely distinct from other visual paradigms, and a heterogeneity of imaging approaches continues to coexist across a range of different media. And, importantly, algorithmic qualities, processes, and structural influences can be found in much earlier periods than the contexts we are most familiar with today.
Recent developments contribute to the emergence of new perspectives that often defy traditional conceptions of images. In contrast to definitions of images as primarily visual, materially individuated objects whose value derives from the investment of human labor and intellect, media artifacts are increasingly understood as processual, ontologically ambiguous, and governed by programmed machines. These qualities, and the historical threads they draw upon, are far from linear: they disrupt conceptions of the history of technology as a flow of successive developments and instead allow different attributes, imaging paradigms, and conceptual associations to coexist within a single image.
The progression from relatively simple algorithmic methods for structuring the visual composition of an image towards the more complex, automated, and artificially creative systems that are commonplace today has taken place through diverse and often seemingly unconnected instances spanning many centuries. Approaching algorithmic visual media from this perspective enables us to find commonalities between recent approaches and diverse examples, such as the execution of images through the systematic use of pre-defined sets of written instructions, geometry, optics, or mechanical automation, that have in various ways led up to or informed the present context. In such cases, algorithmic techniques may be performed by hand or with simple technical apparatus rather than by the highly automated processes now carried out by digital computers, but the underlying principles and modalities bear resemblances that may offer insights into current artifacts, practices, and ideas.
While the role of technology in image-making and in art has been extensively discussed, it remains an area in which there is little consensus, even on foundational issues such as the seemingly simple question "What is an image?", notably problematized by W. J. T. Mitchell (1986). Images, algorithms, and art each entail a philosophical slipperiness in their own right, often proving easier to identify than to define. Images, like algorithms and art, are transmedial, transcending instantiation in any particular material form, a quality that is emphasized in their performance according to algorithmic constraints and procedures. This adds to the existing difficulty of developing coherent image ontologies, as it problematizes attempts to define clearly what an image is, or what it is not, for that matter. The ephemeral nature of images also challenges us to find similarities in modality across superficially diverse forms of media and approaches to image-making, in a way that cuts against the grain of conventions that segment visual culture according to linear chronologies, taxonomies of media, and disciplinary divisions.
This work seeks to unpack the interrelations of algorithms, images, and art. Drawing on the perspectives of art history and media archaeology, it builds upon the central insights developed during my Ph.D. research, Machine Learning and Notions of the Image (Lee, 2020). Considering how notions of the image have been influenced by the rise of machine learning as an imaging paradigm, I seek to situate current discussions of algorithmic media in relation to historical tendencies that have shaped not only visual technologies but also, importantly, how ideas about those developments have become embedded in ongoing discourse. While this exploration focuses explicitly on topics within art and art history, it attends to the ideas behind and associated with technologies of visualization, and to how these ideas have developed over time, more than to their particular instantiations. This is not to say that current methods will soon lose their relevance due to the rapid pace at which technologies and the discourses around them change. Rather, by examining how tendencies have developed over time, we may draw insights into the ways they may progress in the future. Another reason for focusing mostly on theoretical and art historical examples is that this book aims to provide a roadmap of sorts that can be of use to future research and artistic practice. I believe that art cannot be made by thinking solely about art, and I find that conceptual infrastructure is often of greater use than the analysis of particular contemporary instances alone.
This investigation of the intersections of algorithms, images, and art seeks to examine aspects of the history of ideas that are under-recognized in contemporary discourse. While this work has emerged from research specifically into the use of machine learning in contemporary art, it also encourages readers to reflect on the contexts and narratives surrounding technologically engaged art and media artifacts, rather than fixating on any one particular technology, approach, or visual paradigm in itself.
Algorithmic forms of visual media cut across a variety of different media from diverse time periods. This holds vital implications for contemporary contexts, in which algorithmic approaches such as machine learning and artificial intelligence have become extremely influential both in image-making and in how developments in these fields are theorized. Considering this phenomenon in terms of its significance to art contexts offers a view of contemporary practices that engage critically with the cultural significance that algorithmic visual media may have, now as well as in the future. Comparing current ideas and methods to those of the past not only grounds this study in relation to the long history of algorithmic media but also compels us to question certain assumptions that have become deeply ingrained in contexts surrounding art, technology, and conceptions of the image.
This research engages questions about the role of technology in the production of images and art, while also touching on a number of interrelated issues, including the mediation of perception that occurs through images, the automation of production processes, and how visual culture is read differently as a result. Algorithms ultimately have a structuring influence on the image: aesthetic, processual, and, as a result, conceptual. Organizing the performance of image-making processes according to pre-defined constraints has several effects, including enabling the production of images to be guided by a formal set of instructions and rules. This, in turn, facilitates the storage, transmission, repeatability, and iterability of the algorithmic image, contributing not only to the proliferation of images but also allowing a degree of consistency to be maintained across serially produced images.
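To make this property concrete, the following minimal sketch (my own illustration, not an example drawn from the literature discussed here) shows, in Python, an image specified as a formal rule rather than as stored pixels. Because the instructions are explicit, executing them twice yields identical results, and varying a parameter yields a controlled iteration; the rule names and the cell size are arbitrary, illustrative choices.

```python
from typing import Callable, List

# An "algorithmic image": a rule mapping pixel coordinates to a grey value,
# rather than a stored grid of pixels. The rule itself can be written down,
# transmitted, and re-executed elsewhere.
Rule = Callable[[int, int], int]

def render(rule: Rule, width: int = 64, height: int = 64) -> List[List[int]]:
    """Execute the instruction set over the image plane."""
    return [[rule(x, y) for x in range(width)] for y in range(height)]

# One formal instruction: a checkerboard defined purely by arithmetic
# on coordinates (the cell size of 8 is a hypothetical parameter).
def checker(x: int, y: int) -> int:
    return 255 if (x // 8 + y // 8) % 2 == 0 else 0

image = render(checker)
assert image == render(checker)  # repeatability: identical on every execution
```

In this sense the image exists latently, as text that can be stored, transmitted, and re-executed anywhere, which is one register in which the serial consistency described above becomes possible.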
Another key feature of algorithmic image-making is that it lends itself to automation, enabling machines to be programmed to perform or execute algorithmic procedures in place of humans. Shifting perspectives on the relationships between the technical mediation of human vision, images, and real-world phenomena have led to an increasing emphasis on images both as a form of visual data and as based on it. This ultimately calls the empirical basis of data-based images into question, something that was also considered in discussions around earlier technological paradigms of image-making. The adoption of advanced visual technologies in artistic practices also raises difficult-to-remediate issues concerning the extent to which the technical affordances of these systems, and their embedded biases, limitations, and worldviews, are challenged or rather perpetuated in such strategies of appropriation.
The image acts as an interface between the visual and the non-visual, between human and machinic intentionality, and between the making and interpreting of images. These points of flexion are made especially apparent in forms of media that employ machine learning, whose modalities expose the malleability of precisely these boundaries. In order to reflect the nuanced nature of its topic, this book adopts a fairly non-linear approach, exploring intricate webs of association between technical processes and modalities in image production and their surrounding discourses. Rather than progressing in strictly chronological order, each chapter addresses a slice of the issues connected with this topic from a different angle. Pulling at constituent threads, we unravel some of the entanglement of image and non-image, art and not-art, human and machine, vision and process.
Chapter 1, Algorithmic Image Production, introduces the recent tendency towards the use of algorithmic methods such as machine learning in image production. It describes how the generation of images using machine learning has come to occupy the interest of theorists and practitioners across multiple fields, including contemporary art, media studies, and computer science, and it presents the premise that recent technical developments in the production of images draw on discourse from the history of art and visual media, as well as theories of the image.
In Chapter 2, Approximation, we begin by looking at analog forms of algorithmic image-making that facilitate an understanding of similar processes at work in the highly automated systems in use today. As a starting point, we examine an ancient method of cartography that enabled the transcription of maps in written form, along with several other examples in which textual instructions for the production of images were formulated in terms of geometric, proportional relationships. Viewing these examples through the concept of the softimage (Hoelzl and Marie, 2015), we consider how approaches to image-making based on the implementation of simple, analog sets of instructions set the stage for thinking about significant aspects of contemporary algorithmic image-making. This opens up several interrelated threads that are picked up in subsequent chapters, looking at how algorithmic approaches may contribute to images' capacity for latency, automation by machines, and the embedding of optical relationships within the image plane.
The following chapter, Transcription, discusses how the transcodability of images into alphanumeric form structures the process of image-making and endows images with the quality of transmediality. Creating images according to sets of algorithmic instructions enables them to exist in latent, unarticulated form, to be reproduced or iterated upon, and to exist across a range of media, aspects that hold implications for assessments of the cultural and economic value of images as cultural products, but also for the challenge of establishing image ontologies. Through procedural practices (Carvalhais, 2016) and the dematerialization of the art object (Lippard, 1973) in conceptual art, we discuss how an understanding of the processes involved in the production of a work of art, or an image, came to be seen as contributing to the evaluation of cultural artifacts.
Chapter 4, Automation, discusses how technological developments have given rise to a reckoning with the value of human labor that may be displaced by the automation of image production. An emphasis on process was significant to the use of procedural methods in movements such as Surrealist automatism, and we consider how these contributed to the early development of generative strategies in art. This leads us to examine how ideas about the relationships between human and machine visual interpretation and expressions of agency have been powerful factors in value judgments concerning art, images, and visual media in general.
Chapter 5, Alignment, looks at the positioning of the human point of view relative to images. First it examines early methodologies and apparatuses for incorporating optical principles into the production of images, in what Friedrich Kittler (1999) refers to as optical media. It then considers the optical paradigm in terms of the idea of the image as an accurate reflection of the world, and what implications this has for understandings of the mediation of perception that occurs through technological forms of image-making.
The following chapter, Operation, delves further into the tension between the visual and processual aspects of images that arises from their algorithmic formulation. We look into this through Harun Farocki's operational image (2004) and several other related theories concerning the visual, non-visual, and processual aspects of images. Through this concept, we examine the idea of the image as something that is enacted and acts on the world, rather than strictly representing it.
In Chapter 7, Refraction, we look at several historical examples in which composite images are made through the combination of multiple individual images. Here we consider parallels between such approaches and the complex processes involved in machine learning systems. This again raises the issue of treating images and data as interchangeable: on the one hand, such instances often entail associations between technical and scientific methods and presumed degrees of inherent truthfulness resulting from their application in image-making; on the other, the synthesis performed in producing composite images often plays a significant role in shaping the results. Considering this through historical discourse on photographic media and through Lorraine Daston and Peter Galison's (2007) work on forms of visual objectivity in scientific imaging, we explore a range of perspectives on the mediating role of technical forms of visual representation.
The concluding chapter, Distortion, examines how the automation of visual processing tasks may inform the interpretation of the resulting images. Drawing on the potential for divergence between the visual aesthetics of images and the data and processes that lie "behind" or "below" the visible, we discuss how situations of error enable us to see otherwise invisible aspects of visual media.
References
Bianco, Jamie "Skye." "Algorithm." In Posthuman Glossary, edited by Rosi Braidotti and Maria Hlavajova, 24. London: Bloomsbury, 2018.
Carvalhais, Miguel. Artificial Aesthetics: Creative Practices in Computational Art and Design. Porto: U.Porto Edições, 2016.
Daston, Lorraine, and Peter Galison. Objectivity. New York: Zone Books, 2007.
Farocki, Harun. "Phantom Images." Public 29 (2004): 12–22.
Hoelzl, Ingrid, and Rémi Marie. Softimage: Towards a New Theory of the Digital Image. Bristol: Intellect, 2015.
Hoelzl, Ingrid, and Rémi Marie. "From Softimage to Postimage." Leonardo 50, no. 1 (2017): 72–73.
Kittler, Friedrich. Optical Media: Berlin Lectures 1999. Translated by Anthony Enns. Cambridge: Polity, 2010.
Kittler, Friedrich. "The Finiteness of Algorithms." Presented at the transmediale festival, Berlin, March 2, 2007.
Lee, Rosemary. Machine Learning and Notions of the Image. Ph.D. diss., IT University of Copenhagen, 2020.
Lippard, Lucy R. Six Years: The Dematerialization of the Art Object from 1966 to 1972. New York: Praeger, 1973.
Lund, Jacob. "Questionnaire on the Changing Ontology of the Image." The Nordic Journal of Aesthetics 30 (July 2021): 6–7.
Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. London: Penguin Books, 2019.
Mitchell, W. J. T. Iconology: Image, Text, Ideology. Chicago: University of Chicago Press, 1986.
Uricchio, William. "The Algorithmic Turn: Photosynth, Augmented Reality and the State of the Image." Visual Studies 26, no. 1 (March 2011): 25–35.
Zylinska, Joanna. AI Art: Machine Visions and Warped Dreams. London: Open Humanities Press, 2020.