In Deconstructing Representation, I experimented with training a generative adversarial network (GAN) on a dataset composed of every image I had saved on Instagram. The goal was to see what a machine learning model would produce from a disorderly collection of images I find interesting, hoping the results would resemble a visual style extracted from an artist’s mood board. Deconstructing Representation documents the process of training the model, cataloguing the input and output images in a large-scale print. Co-opting the sculptural form of the photography studio backdrop, the work plays with how the structures inherent to the production of images ultimately influence how those images are received.
This work was produced within the context of my PhD research, Machine Learning and Notions of the Image (2017–2020).