Professor Kate Crawford is one of the world’s leading scholars on the social and political implications of artificial intelligence. A research professor, honorary professor, and visiting chair at numerous international institutions, including USC Annenberg, the University of Sydney, and the École Normale Supérieure, Paris, she is also a senior principal researcher at Microsoft Research, New York. Over her 20-year research career, she has produced groundbreaking creative collaborations and visual investigations.
Her project Anatomy of an AI System, a collaboration with Vladan Joler, won the Beazley Design of the Year award and was acquired for the permanent collections of MoMA, New York, and the V&A, London. Her collaboration with the artist Trevor Paglen produced Training Humans, the first major exhibition of the images used to train AI systems. Their project Excavating AI won the Ayrton Prize from the British Society for the History of Science.
Crawford’s latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press), has been described as ‘trenchant’ by the New York Review of Books, ‘a fascinating history of data’ by the New Yorker, and a ‘timely and urgent contribution’ by Science, and was named one of the best books on technology in 2021 by the Financial Times.
Crawford sits down with internationally recognised AI artist Sam Leach to discuss his contributions to the field ahead of his solo exhibition at Sullivan+Strumpf.
Sam Leach (SL): Kate Crawford, thank you very much for taking the time to talk about machine learning (ML), my work, artificial intelligence (AI), and the rapidly advancing technological Armageddon. The scope of machine learning applications is growing extremely quickly. How far do you see this extending, how is it likely to shape society, and what role can artists play in that?
Kate Crawford (KC): Well, those are a lot of very big questions! Let’s start from the top. Machine learning has really suffused almost every aspect of everyday life. It’s in education, it’s in healthcare, it’s in hiring, it’s in criminal justice, it’s in policing, it’s in the military. It really has become an all-purpose tool. It’s the hammer for which everything is being turned into a nail. It is already well and truly a very active participant in shaping our social worlds in terms of how we see ourselves, how we see other people, how we understand the world in front of us, everything from autonomous vehicles to smartphones, to the sensors that track us as we move through city streets or when we go in and out of airports. All these applications are not just extracting enormous amounts of data about us, they’re also making predictions about how we will live, what we will buy, whether we’ll be a good employee, whether or not we deserve to get bail.
There are a small handful of tech companies that run these systems at scale, I tend to think of them as the great houses of AI. There are fewer than 10, depending on how you count, around the world. And in terms of the ones that actually control and own the backbone, which is poorly named ‘The Cloud’—because of course it is not at all spectral and abstract and floaty, it is in fact a profoundly material and energy consuming structure—there are really only four companies that own those structures. It is one of the most concentrated industries that we’ve had in the history of the world.
We see billionaires quietly buying up everything from how we’ll be accessing communication networks like Twitter, through to how we get shopping, through to how we get things delivered to us. So how do we get more people having a say? How are we really able to push back to say these are the places where AI is useful, and those are the places where it’s failing us and should not be used?
Artists have an extraordinarily important role to play here. One of the great privileges of my job is that I’ve collaborated with multiple artists around the world, like Trevor Paglen, Vladan Joler and Hito Steyerl. These artists ask questions about power and technology and, in doing so, they make these questions available to a much bigger audience than I could normally reach in academic corridors.
SL: Given artists do play a role in widening the audience and bringing to light some of these hidden processes, I’m interested to get your thoughts about how a physical medium like painting can say something relevant about machine learning.
KC: I think the way that you’ve been engaging with the tools of machine learning is really at the layer of how images are generated and how they are understood through the machinic gaze, if you will. In giving us the ability to see that, I think it does a couple of really powerful things:
One, it allows us to see the kind of distortions or ameliorations, an alien vision that is constructed at the level of a neural net. It shows the ways in which you can get things horribly wrong. What you see is how many sorts of stereotypes emerge, how many kinds of clichés around the visual there are, the deep Americanization of vision itself, and the profound cultural capture and appropriation of these tools. I think painting allows an intervention at that level to see that alien nature, and to start to question how meaning is being made.
The second thing that I think is really important for painting, and for art more generally, is to give us a way to think of the world differently, to open up an imaginative space, to think about how AI could be otherwise. In what other ways could we use these tools creatively, politically, to actually unseat forms of undemocratic power? I see it as a profound space of utopian potential. These tools can do different things than simply be used for targeted advertising, hiring and targeted attacks; there is this other way. In that sense, certainly, art can be very powerful. I’m curious about your experience, Sam, in terms of how you’ve been working with this data and these models. What have you been discovering?
SL: The large pre-trained models (trained on essentially the whole of the internet) tell you something about internet users and how they understand images. With these models, I’m seeing images generated that remind me of oil spills and polar bears, these avatars of climate change. There’s a certain mood or theme portrayed that I’m not directly putting into the models, but they seem to be pulling these things together regardless. It is also interesting to see that it’s really only a few lines of code that generate these results, and the rest is just the data that fed into them. What I’ve learned is that the actual mechanism driving these models isn’t that complicated; rather, it’s the data and scale that produce such surprising conclusions.
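Leach’s point that the core mechanism really is only a few lines of code can be sketched in miniature. The toy generator below is not any model Leach actually uses; it is a hypothetical, radically shrunken illustration of the generative pattern he describes: a random latent vector passed through a small weighted layer to produce ‘pixel’ values. The sizes and the untrained random weights are assumptions for illustration only; in a real system, the architecture stays almost this simple while the trained weights, learned from vast internet-scale data, carry all the behaviour.

```python
import math
import random

def generator(latent, weights):
    """One linear layer followed by tanh: maps a random latent vector
    to 'pixel' values. The mechanism is genuinely this small; what the
    output looks like is determined entirely by the weights, which a
    real model learns from enormous amounts of training data."""
    pixels = []
    for row in weights:
        # weighted sum of the latent vector against this pixel's weights
        s = sum(w * z for w, z in zip(row, latent))
        # tanh squashes each value into [-1, 1], a common pixel range
        pixels.append(math.tanh(s))
    return pixels

random.seed(0)
latent_dim, n_pixels = 8, 16  # toy sizes; real models use far larger ones
latent = [random.gauss(0, 1) for _ in range(latent_dim)]
weights = [[random.gauss(0, 1) for _ in range(latent_dim)]
           for _ in range(n_pixels)]

image = generator(latent, weights)
print(len(image), all(-1.0 <= p <= 1.0 for p in image))  # → 16 True
```

With random weights this produces noise; the ‘few lines of code’ stay the same when the model is scaled up, and it is the data-derived weights that make the difference Leach describes.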
KC: What you say is deeply resonant with my experience. People assume these systems are highly sophisticated and doing profoundly impactful things. I developed a term for this with a scholar, Alex Campolo: he and I wrote a paper on the concept of ‘enchanted determinism’, the idea that these systems are enchanted, almost magical and superhuman in their power. They’re deterministic, and somehow able to give highly accurate predictions about what should be done or how we should live. But, in fact, the opposite is true. These are systems that are profoundly simple, basic and, in many cases, just wrong. They are the opposite of neutral or objective; they’re profoundly political and skewed.
I love the images that you produce; the recurring polar bear and these images of almost environmental collapse are haunting. I think about them almost as the ghosts in the AI machine. We’re seeing the legacy of the true planetary costs of these systems. This is a core but often overlooked issue when it comes to AI: it is leaving such a toxic environmental legacy. In your work, I think all of those kinds of deeper truths are actually evident in the imagery, which is just so interesting to me.
When you work with Generative Adversarial Networks (GANs), there’s a particular type of look that you get: a melted-plastic, weird underworld feeling, like a slightly nightmarish Daliesque vision of the world. I don’t think it’s a bad thing to see that visually and to imagine ‘if that’s what it looks like aesthetically, what does that look and feel like politically? What does that do linguistically? What does that do in all of the other different ways that these systems are being asked to interpret and intervene in our world?’
I think this is why artists have such an extraordinarily powerful set of tools to play with, not just to create these images, but to use them as a way to convey how these systems are distortionary by their very nature. That is one of the most important things for people to really grasp. There are many different ways to do it, but I feel like art is one of the most arresting and immediate ways for people to see what that strange, nightmarish landscape can look like and how it might actually be affecting them in the many different ways in which AI is already playing a role in their lives.
SL: Absolutely. I feel like that’s actually a great place to wrap it up. Thank you again for taking the time to talk.
KC: It’s been such a pleasure, Sam, really. Your work is so beautiful and it’s so important to be doing this kind of work at this time and to be raising these questions.