“Siri, play me something I would like...” asks FKA Twigs in Spike Jonze’s 2018 advertisement ‘Welcome Home’ for Apple HomePod. To the opening notes of Anderson .Paak’s song Til It’s Over, she dances through her living room as it morphs around her in a technicolour dream, triggered by Siri’s recommendation. Implicit in the ad is Twigs’ intimate and emotional relationship with Siri. By aestheticising emerging technologies in this way, an alchemy-like quality is cast onto AI, obscuring technical applications behind imagined emotional qualities. In reality, we are far from having a true emotional connection with the intelligence we are training, and these contemporary mythologies veil the intricate, complex and violent processes contained in the black boxes of AI systems. How can we truly understand the impact of AI’s accelerating intervention into our lives? As humans hand more and more decision-making power to systems embedded with AI, are we misplacing our efforts by allowing it to permeate our emotional experience?

For Kate Crawford, co-founder of the AI Now Institute at New York University, “artificial intelligence systems will always see with the eyes of the master.” Speaking with her during the opening of Training Humans, an exhibition she conceived and curated with artist Trevor Paglen at the Milan Osservatorio (one of the venues of the Prada Foundation), Crawford proposes that debates on AI should centre on the forces of power behind the technology, rather than on pursuing technical advances. An ethical approach to AI is “not about perfecting algorithms, or making them fairer, it’s about how to question the actual power that drives the way they see.” Through an interdisciplinary approach, the AI Now Institute situates itself at the forefront of the AI and ethics debate. Crawford and co-founder Meredith Whittaker, who also founded Google’s Open Research Group, bridge academic research with insights into industry to investigate the current impact of AI on our daily lives. Crawford’s work follows a three-part strategy: research that investigates AI systems already deployed, such as predictive policing in the United States; policy advice that helps define governmental regulatory strategies for AI; and public events, symposiums and collaborations on visual strategies with contemporary artists that open the technology up to public debate.

As AI is introduced into our daily lives at an accelerating rate, Crawford argues that its overwhelming influence on the future of civil society necessitates an urgent excavation of the many back ends and processes behind it. The arts, as research and critical practice, offer useful strategies for disassociating technologies from the routine of daily life, mirroring them back to us in strange, absurd or surreal ways. Whether through analytical visualisations, or by enabling people to perform the systems at play, her collaborations provide a critical examination of the current status quo.

Exhibition view, Training Humans. Photo: Marco Cappelletti / Courtesy Fondazione Prada. ImageNet, Fei-Fei Li, Kai Li (detail), 2009


SUPERSYSTEMS

Published in 2018, Anatomy of an AI System is a visual investigation into the invisible infrastructure behind a product well known to the general public – the Amazon Echo. Made in collaboration with SHARE Foundation founder Vladan Joler, the infographic and long-form essay takes us on a non-linear journey through the “interlaced chains of resource extraction, human labour and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization” that are triggered by our simple interactions with the device. Dissecting the process and mapping it out visually confronts the planetary-scale complexity of material resources, human labour and data, and the way they cross borders, time zones and the Earth’s crust. It had a profound impact on Crawford. As she puts it: “I am used to thinking about the black box of an algorithm. But I didn't realise black boxes are also everywhere across the chain of production, especially in relation to mining for rare earth minerals. We were researching places where there is slave labour, where people will be shot if they try to find out how and who is running mines. Experiencing this study taught me something very profound about the hard limits of knowing, and of transparency.”

For many, owning an Amazon Echo is convenient. Small gestures of the everyday are replaced by a conversation of command/recommendation. But at what cost? “We are doing something so extraordinary by handing over so much decision-making power to these systems. We need a lot more debate about it,” Crawford explains. To quote Anatomy of an AI System: “the Echo user is simultaneously a consumer, a resource, a worker, and a product.” Essentially, every time you communicate with an Echo you feed the neural networks of Amazon’s infrastructure, sharpening the intelligence of Alexa. For Crawford, it is clear there is a considerable divide between the complexity of the system itself and what people are using it for. “It’s a bad trade for convenience,” she says, “it's a false convenience, which will have long-term expenses and consequences.”

CRAP IN, CRAP OUT

Transparency is key if we want to discern the invisible powers that influence AI. In their exhibition Training Humans, Crawford and Paglen attempt to expose the ways artificial intelligence has been developed to see humans through the use of training datasets: photographs of anonymous and nameless faces, bodies, facial expressions, gait analysis, mugshots, fingerprints and biometrics. These datasets, developed by educational, corporate and governmental institutions, train artificial intelligence to recognise, learn and categorise human conditions for outputs such as facial recognition.

The exhibition is an artistic intervention as investigation, shedding light on the ways researchers in the field of machine learning and AI define and categorise populations by attempting to identify the commonalities and normative features of a few. Following a historical axis, datasets are displayed from the late 1960s to today, with little change in the strategies for classification and categorisation of images. As Paglen describes in conversation with Crawford, the underlying assumption of these datasets is that you can tell someone’s gender, age or emotional state just by looking at a still image of them. “What part of this is science, what part of this is history, what part of this is politics, what part of this is prejudice, what part is ideology?” Centralising and publicly displaying all these datasets together in one space creates a feedback loop that profoundly questions the appearance of emerging technologies in society. It brings to light the political complexity of using systems like facial recognition in private and public spheres, reveals the institutions that generate these datasets, and triggers an urgent question: how does the training of an artificial intelligence reflect the biases of those who create it?

On display is JAFFE, short for Japanese Female Facial Expression, developed by researchers (Michael Lyons, Miyuki Kamachi and Jiro Gyoba) at Kyushu University in 1997 to train machines in affective recognition – the detection of human emotion through facial expressions. The images show basic facial expressions of human emotions such as happiness, sadness, anger and surprise. According to Crawford, the dataset is clearly influenced by a much-maligned theory developed by American psychologist Paul Ekman, that all of humanity shares seven universal emotions expressed by the same facial movements. It seems absurd to imagine that all the complexity of the human condition can be communicated by a simple set of facial expressions, yet in 2019 (22 years after this study was published) many technological corporations are pumping huge amounts of research and funding into affective recognition using these principles.

For example, in a text currently on its website, Microsoft Azure claims to integrate “emotion recognition, returning the confidence across a set of emotions for each face in the image such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.”
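To give a sense of what “returning the confidence across a set of emotions” means in practice, the sketch below shows roughly how such a request could be made against a face-analysis endpoint of this kind, assuming the Azure Face detect REST endpoint with the emotion attribute enabled. The endpoint region, subscription key and image URL are placeholders; this is an illustration of the interaction, not Microsoft’s documented integration.

# Minimal sketch (Python): asking a face-analysis endpoint for per-emotion
# confidence scores. Endpoint region, key and image URL are hypothetical.
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder region
SUBSCRIPTION_KEY = "<your-subscription-key>"                     # placeholder key
IMAGE_URL = "https://example.com/portrait.jpg"                   # placeholder image

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    json={"url": IMAGE_URL},
)
response.raise_for_status()

# Each detected face comes back with a confidence score per emotion label
# (anger, contempt, disgust, fear, happiness, neutral, sadness, surprise).
# Here we simply report the highest-scoring label for each face.
for face in response.json():
    emotions = face["faceAttributes"]["emotion"]
    top = max(emotions, key=emotions.get)
    print(f"Face labelled '{top}' with confidence {emotions[top]:.2f}")

Whatever the plumbing, the output is simply a set of numbers attached to a face – precisely the reduction of emotional life that Crawford and Paglen call into question.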

SCRAPED IDENTITIES

By the early 2000s, found images scraped from the internet and social media had started to appear in datasets. Perhaps the most startling dataset on display is the publicly available ImageNet, instrumental in the development of ‘deep learning’ – the mathematical technique that allows machines to ‘see’ images, including faces. Initially developed by Professor Fei-Fei Li at Princeton University in 2006-07, the dataset contains over 14 million images, with a section devoted to the categorisation of humans containing around 2,000 categories. Labels like “bad person”, “call girl”, “drug addict”, “nymphomaniac”, “fucker”, “colour-blind person”, “adulterer”, and “fanatic” are assigned manually by researchers in labs or by workers on Amazon’s Mechanical Turk. As Paglen explains: “When you classify people, that line between a description of someone and a judgement of them, can get very blurry.” In turn, Crawford and Paglen expose the people unwittingly used in these datasets even further by exhibiting them publicly in an arts institution, calling into question the level of privacy we have online and the rights we have over the distribution of our images.

The installation ImageNet Roulette allows people to have a lived experience of the dataset. When you sit in front of the interface, your face is scanned and then labelled by an AI model trained on the images categorised in ImageNet. Many results carry a sinister undertone that exposes a multitude of racial, sexist, ableist and homophobic biases. By performing within the labelling system, participants directly engage with the biases in it, challenging the status quo through embodied experience. Ironically, the intervention went viral on social media after going live online for a week during the opening of the exhibition, at one point generating around 100,000 images per hour. In a display of social amnesia, people swiftly took snapshot images of themselves within the work and uploaded them to Instagram – the platform which would then use those selfies to sharpen algorithms for sales and advertising. After the intervention, ImageNet scrubbed around 600,000 offensive images and labels relating to humans from the database. Yet the solution here is not a technical fix, but examining whether we need these kinds of systems at all.

Exhibition view, Training Humans / photo: Marco Cappelletti / courtesy Fondazione Prada. From left to right: SDUMLA-HMT, Yilong Yin, Lili Liu, Xiwei Sun, 2011; A FACIAL RECOGNITION PROJECT REPORT, Woodrow Wilson Bledsoe, 1963


The aestheticisation of technological advancement within marketing, science fiction and the arts communicates AI as a mystical, mysterious entity. ImageNet Roulette gets right to the crux of a sometimes-forgotten fact – humans, with their conscious and subconscious cultural biases, are teaching machines how to see other humans. It is a well-known fact that the field of AI is dominated by white males. A study conducted last year by Wired revealed that only 12% of data researchers working on AI systems were women, and that most came from higher education or privileged backgrounds. The people whose work underpins the visions for emerging AI systems rarely resemble the society their interventions are intended to transform.

LAG

Yet artistic and research investigations still lag behind the rate at which multinational corporations deploy technology. Beginning in December 2017, Orlando in the United States piloted Amazon’s commercially available facial recognition software, Amazon Rekognition, in public space, despite the technology still being under development. (The two-phase pilot programme was knocked on the head this summer after what the Orlando Weekly described as “15 months of technical lags, bandwidth issues and uncertainty over whether the controversial face-scanning technology actually works.”) In an opinion piece published in Nature, Crawford writes: “These tools generate many of the same biases as human law-enforcement officers, but with the false patina of technical neutrality.” This year the cosmetics company Lancôme (one of the subsidiary brands of L’Oréal) developed a new custom machine that uses facial recognition and algorithms to find the perfect shade of foundation for consumers, matching a scan of your skin tone to over 20,000 shades. Instagram uses algorithms to scan the 70 million images uploaded to the platform every day for business intelligence and insights into human behaviour, which can be sold to third parties. In China, KFC has been testing a service since 2017 that allows customers to pay for a meal using a facial detection payment system provided by Chinese tech giant Alibaba.

Irish company Everseen supplies the technology used by Walmart across 1,000 stores to thwart would-be thieves using facial recognition. The examples of AI being tested on live populations are dizzying, immense and woefully unregulated. So what can we do? “There is no way to deliver perfect transparency to a consumer,” Crawford explains. “Data scientists, designers, engineers and researchers have to ask: why am I making this system, who does it serve, and who might it harm? Simply asking those questions at the beginning of any process is so rarely done and would make a profound difference.” These questions are at the core of Crawford’s collaborations with artists. The arts are able to act as investigation, laying the processes within AI systems bare for all to see. By decontextualising technologies already at play in daily life, and mirroring them back to us within the structures of institutional arts presentation, a space is created for critical evaluation – if only for a moment, before the next development sweeps us away.

This article appeared in DAM75.