Excavating the Image

“AI is neither artificial nor intelligent. There is an enormous environmental footprint – the minerals, the energy, the water – that drives AI. This is the opposite of artificiality. It’s profound materiality.” Author and scholar Kate Crawford is best known for her work addressing the tangible realities of AI, from its impact on the environment to workers’ rights. In 2021, she published her extensive research on this topic in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Today, we are living in a world where AI is more publicly accessible than ever before, as AI-generated images flood the internet and the technology becomes a tool across sectors. In recent years, we’ve also seen artists explore and interrogate this burgeoning technology. Refik Anadol, Sougwen Chung and Trung Bao might come to mind. This October, London-based photographer Felicity Hammond (b. 1988) shares her contribution to these prevalent artistic and cultural conversations. Variations is an evolving installation that explores the relationship between geological mining and data mining, as well as image-making and machine learning. Unveiling at Photoworks Weekender in Brighton, the project will then travel across the UK to Derby, London and Edinburgh. It has been commissioned through the Ampersand/Photoworks Fellowship, a unique biennial opportunity that supports a mid-career artist to create and exhibit a new body of work. We interviewed Hammond to learn more about this fascinating series. Read on to uncover the key concept of “model collapse”, the artist’s image creation process and her exciting plans for future exhibitions.

A: What sparked the idea for this project?

FH: I have spent many years exploring the power of photography, focusing on images that use the language of the medium without necessarily being made with a camera. For example, my last project took a deep dive into the world of computer-generated architectural propositions that we find plastered on billboards and site hoardings. At first glance, they appear real – but of course they are not. They are computer-generated speculations that mimic photographs in order to sell a building that doesn’t yet exist. Images like these are powerful and demand attention, and I felt that they needed to be critiqued through the lens.

As that project was coming to an end, I began to notice how AI-generated images were becoming more commonplace. Renderings created using machine learning were being used across a range of contexts beyond architectural representation. I wanted to critique this new technology – which is really only novel in terms of its accessibility to the average computer user – and bring attention to the ways in which machine learning-based programmes rely on an ecosystem that extends far beyond the interfaces with which their users are presented. I wanted to reflect on the entanglement of this process with the politics of surveillance, data capture, polluting processes and the exploitation of land, resources and labour. I saw an opportunity to point the camera lens back towards the technologies that mimic the photographic, to find a place for the camera in this new form of image-making, as a way to reveal their problematic infrastructure beyond the images they produce.

A: The installation is set to be unveiled at festivals and galleries across the UK. Why did you choose to share this project as a gradually evolving exhibition rather than a single show? 
FH: I decided to explore the concept of variations in the way I shared this exhibition. Rather than making this project as a single show that could be toured, I referred to the image generation platforms that utilise machine learning in their outputs. When using text-to-image software, we are offered four output variations from which to choose. The curatorial process in this project mimics this. Each public output is a variation on the others that accompany it, shifting its focus towards a distinct aspect of the complex landscape of machine learning. V1: Content Aware begins by reflecting on the global infrastructures that support the digital economy, using a shipping container to comment on the contents that are usually hidden; V2: Rigged brings together the extractive processes that exploit the land, humans and depleting resources; V3: Model Collapse asks us to consider what happens when the data produced by machine learning programmes is inevitably fed back into its own system, thus polluting its own sense of reality; V4: Repository highlights the role of the data storage centre as a site of transmission and as an extractive and polluting process in its own right. Each variation uses a similar visual language, but presents a different view on the subject as a way to map its expansive territory.

A: Can you describe your process of creating the images across these installations?
FH:
The project begins with a shipping container wrapped in a photographic collage. It lands in Jubilee Square in Brighton because I was drawn to the coastal proximity of this location. Next, it is fitted with a security camera, which monitors the square and its audience. The storage unit does not just display images, but also captures them, extracting image data from a public site and gathering material for the variations that follow. Alongside this method of generating image data, each exhibition will be photographically documented. These shots will form the basis of a “training set” for the following show. This process will continue across all of the exhibition venues, where the re-staging of each work mimics the constantly evolving datasets that inform machine learning platforms. The resulting exhibitions will shine a light on the processes and power dynamics at play in this new era of photography. Here, I’m reflecting on the constant extraction of personal data, the scraping of images and aggressive surveillance techniques. By mapping the material relationship between global sites of mineral extraction, the computational landscapes of data mining, the photographic studio and the site of exhibition-making, the project interrogates a process that I think is quickly becoming a dominant way of seeing.

A: One key aspect of Variations is the connection between image-making and machine learning. Could you share with us your reflections on this relationship?
FH:
Machine learning-based generators learn from existing images. The more photographs made and fed into these programmes, the more data they have to inform further results. I found this cycle intriguing. There is potential for the dataset to be contaminated by AI-generated work. This is a possibility I explore in the third variation of the project, Model Collapse. Secondly, there’s the reality that the outputs of these programmes aren’t as random as they might seem. German writer and filmmaker Hito Steyerl (b. 1966) talks about this idea in her essay, Mean Images. Here, mean refers both to the average mediocrity of products generated using machine learning tools, which rely on what came before, and also to their exploitative processes.

Whilst making Variations, I wanted to find a way to enact the aggressive processes that enable machine learning programmes to operate. I explore this idea in the second variation of the project, titled V2: Rigged. Rigs are structures that support both mining and photographic tools. In this project, I use such apparatus to bring these processes together. Part camera, part drill head, part processing plant, the machine at the centre of the installation enacts the various violent acts of taking that are at the centre of computational image-making. When something is rigged, it is implied that the machine is capable of manipulating results, or that the outcome has been pre-determined – an idea that I think lies at the centre of this new form of image-making.

A: There’s also the relationship between geological mining and data mining. What are some of the most impactful insights you’ve gained from your research?
FH: As the second part of the Rigged project implies, mining and extraction are tied up in the processes that enable machine learning tools to operate. The extractive processes that are fundamental to the growth of machine learning tools (and of course other digital technologies that expand across many sectors) begin beneath the surface. For example, minerals like lithium and cobalt are mined for the production of hardware. Coal and water are needed to sustain the enormous data centres that keep generative AI tools running. There is plenty of research out there about the ecological footprint of our digital sector. For example, scholar Kate Crawford (b. 1974) brings some of this together really brilliantly in her book Atlas of AI (2021). My project interrogates and re-stages what mining might look like in relation to new forms of image-making. I am attempting to find a visual language that unearths the hidden infrastructure of the mine, from the extraction of global resources to the extraction of personal data. The drill head/camera/machine hybrid that I am making feels like a good start in bringing these processes together. Both elements create and refer to something quite monstrous.

A: What are your plans for the next exhibition, which is set to be created from the photographic records of the first iteration of Variations?
FH:
The second variation of the project will take place at QUAD in Derby. The machine that I just described will be set against a pixelated backdrop of photographic material made using imagery gathered from V1: Content Aware. The backdrop presents an image in the process of being formed. Placed in front of a mirrored wall, the machine confronts an image of itself against this backdrop. At the centre of the machine is a camera lens, which (on a programmed shutter release) will take photographs throughout the period of the exhibition. The machine gathers more image data, building on its existing set – images of itself, images of the audience viewing the work and images of the audience viewing themselves. The act of looking and taking is central to the machine’s operation. This data will – in part – inform the show that follows, V3: Model Collapse.

A: Your conceptual renderings of V3: Model Collapse offer the viewer an abstract composition loaded with scrunched foil, reflective surfaces and pixelated sections. Could you tell us more about this piece?
FH:
Mimicry is an important method in my work, and the image that I have made to illustrate V3: Model Collapse serves as an imitation because the actual installation does not exist yet. I want to bring the minerals that have been unearthed after billions of years into contact with the technologies that they power. But, at the same time, I am developing processes that hold a mirror up to the way that machine learning models operate. As such, I wanted to use materials within my collages that imitate – just as images made using machine learning tools are imitations of photographs – with an indexical reference. In the collage that represents the third variation, two versions of the same image confront one another. One is a photograph taken in my studio; the other is an AI-generated version. They bleed into each other, pixels transferring from the surface of the print into the materials being pictured. Mirrors and reflections distort where the referent lies.

The phenomenon of “model collapse” arises when a programme is trained on data generated using machine learning tools. It results in a degenerative learning process: a feedback loop in which data produced by machine learning models is fed back into their own systems, polluting their sense of reality. V3: Model Collapse explores this hallucinatory process by feeding the data produced from the previous variations of the project into machine learning programmes to produce further variations on the imagery. Taking this further, the images that will be displayed in the third variation will actually be photographs that re-stage these AI-generated images, returning back to the camera. The exhibition space – in this variation, The Photographers’ Gallery – becomes a site in which these images are both processed and displayed.

A: What would you like audiences to take away after engaging with Variations?
FH:
I think the most frustrating thing about this project for an audience is that the majority of people that engage with it will never experience it in its entirety. And this is completely intentional: I am interrogating and mimicking a process where so much is obscured. This project is about accepting the gaps, and acknowledging that there is a hidden infrastructure behind the images that we engage with on a daily basis. I find that a lot of art making has a deliberate immediacy – often this is to do with the way it is consumed digitally and how it works as an image or document. This project intentionally pushes back at this idea. Here, the work unfolds as an evolutionary process. 

A: Can you share with us other pieces you are currently working on? What else can we look forward to seeing from you? 
FH: Alongside this project I am working on an installation for an exhibition at Fondazione Mast in Bologna, Italy. This project is called Autonomous Bodies and here I am looking at AI-driven processes. However, this project specifically addresses car manufacturing. Here, I address how contemporary electric car production embraces Industry 4.0 principles, in contrast with the automobile as a symbol of twentieth-century industrialisation. The ideas come together through a new installation which includes a full-size flattened car panel framing a collage work, and some new hydro-printed sculptures which take alloy wheels and turn them into extractive tools. This one opens in January 2025 and I’m excited to share it soon!


Photoworks, Felicity Hammond: Variations | Opens 24 October

photoworks.org.uk

V1: Content Aware (Brighton, Dreamy Place at Jubilee Square) 24 – 27 October 2024

V2: Rigged (Derby, QUAD as part of FORMAT International Photography Festival) 13 March – 15 June 2025 

V3: Model Collapse (London, The Photographers’ Gallery) 27 June – 21 September 2025

V4: Repository (Edinburgh, Stills Centre for Photography) 6 November 2025 – 7 February 2026

Words: Felicity Hammond and Diana Bestwish Tetteh


Image Credits:

  1. © Felicity Hammond, V3 – Model Collapse.
  2. © Felicity Hammond, V1 – Content Aware.
  3. © Felicity Hammond, V3 – Model Collapse.
  4. © Felicity Hammond, V2 – Rigged.
  5. © Felicity Hammond, V4 – Repository.