It’s quite easy to think of artificial intelligence as belonging to the realm of dystopian science fiction, something that may one day appear on the horizon in the form of eerily familiar life forms. Yet, unbeknownst to many, artificial intelligence has already slipped into our daily lives in subtle ways.

In their exhibition ‘Training Humans’ at Osservatorio Fondazione Prada in Milan, researcher and AI Now Institute co-founder Kate Crawford and visual artist Trevor Paglen expose the ways artificial intelligences have been developed to see humans through training datasets: photographs of anonymous, nameless faces, bodies, facial expressions, gait analyses, mugshots, fingerprints and biometrics. These datasets, developed by educational, corporate and governmental institutions, train artificial intelligence to recognise, learn and categorise the human condition for outputs such as facial recognition. Never intended for human consumption, the exhibited photographs reveal the ways researchers in the fields of machine learning and AI define and categorise entire populations by attempting to identify the commonalities and normative features of a few. Displayed out of context, they trigger an urgent question: how does the training of an artificial intelligence reflect the biases of those who create it?

Screen: Labeled Faces in the Wild, Gary B. Huang, Manu Ramesh, Tamara Berg, Erik Learned-Miller, 2007. Wall: Cross-Age Celebrity Dataset (CACD), Bor-Chun Chen, Chu-Song Chen, Winston H. Hsu, 2014. Courtesy Fondazione Prada.


A series of faces from the FERET dataset, funded by the United States military from 1993 to 1996, stares obtrusively from a wall on the first floor. The photos read almost as portraiture: a series of anonymous people carefully and stylistically posed for the camera, “reminiscent of anthropometric experiments conducted in the late 19th and 20th centuries,” as Crawford notes. The exhibition follows a historical axis, displaying datasets from the late 1960s to today, with remarkably little change in the strategies for the classification and categorisation of images. As Crawford observes in a conversation with Paglen, “the functional photography of the past is now training the systems of the future.”

(Left) SDUMLA-HMT, Yilong Yin, Lili Liu, Xiwei Sun, 2011. (Right) A Facial Recognition Project Report, Woodrow Wilson Bledsoe, 1963. Courtesy Fondazione Prada.


Researchers at the University of Tennessee, Knoxville created UTKFace, a dataset of 20,000 faces, to classify people by race, gender, and age. According to the dataset, gender is binary and race can be represented by the categories White, Black, Asian, Indian, and Others.

CASIA Gait and Cumulative Foot Pressure, 2001. Shuai Zheng, Kaiqi Huang, Tieniu Tan and Dacheng Tao. Created at the Center for Biometrics and Security Research at the Chinese Academy of Sciences, the dataset is designed for research into recognising people by the signature of their gait. Courtesy Fondazione Prada.
When confronted by a found image set like the University of Tennessee’s UTKFace, developed in 2017, which categorises people’s faces by binary gender and simplistic ideas of race alongside assumptions of age, it is hard not to recall the same techniques used by violent colonialist regimes of the 20th century. That these same tactics are now embedded in the development of emerging technology tells us that artificial intelligence, and how it is used, stretches far beyond technical pursuit and into the realms of the political and social.

The timeline also creates a simultaneous narrative of how the tactics and procedures for creating datasets have changed: from staged facial expressions to images harvested, in many cases without permission, from social media platforms and the wider internet. By exhibiting them publicly in an arts institution, Crawford and Paglen further expose the people used in these datasets, calling into question the level of privacy we have online and the rights we have over the distribution of our images. The questions provoked by this exhibition are urgent, yet they lag behind the speed at which these technologies are already being live-tested on populations.

Centralising and publicly displaying all these datasets together in one space creates a feedback loop that profoundly questions the place of emerging technologies in society. It brings to light the political complexity of using systems like facial recognition in public space, the home and policing, to name a few, and reveals the institutions that generate these datasets. Perhaps most startling, it appears that those of us who post images of ourselves and others to social media channels are complicit in the training and development of these systems, already so intangibly threaded through our lives. How could it be that, by the very act of photographing a loved one, we become complicit in a system already at play, one that aims to objectify, itemise and categorise the human condition?

‘Training Humans’ by Kate Crawford and Trevor Paglen is on at Osservatorio Fondazione Prada in Milan, Italy until 24 February 2020. An article featuring Kate Crawford, further discussing artificial intelligence and its social impact, will appear in the upcoming issue of Damn magazine, published in October.

CASIA Gait and Cumulative Foot Pressure, 2001. Shuai Zheng, Kaiqi Huang, Tieniu Tan and Dacheng Tao. Created at the Center for Biometrics and Security Research at the Chinese Academy of Sciences, the dataset is designed for research into recognising people by the signature of their gait. Courtesy Fondazione Prada.
Kate Crawford | Trevor Paglen. Photo by Marco Cappelletti. Courtesy Fondazione Prada.
Installation view: Cohn-Kanade AU-Coded Expression Database. Takeo Kanade, Jeffrey F. Cohn, Ying-Li Tian, 2000. Courtesy Fondazione Prada.
The Japanese Female Facial Expression (JAFFE) Database. Michael J. Lyons, Shigeru Akamatsu, Miyuki Kamachi, Jiro Gyoba, 1997 (detail). Courtesy Fondazione Prada.
Installation view: ImageNet, Li Fei-Fei, Kai Li, 2009. Courtesy Fondazione Prada.
ImageNet, Li Fei-Fei, Kai Li, 2009 (detail). Courtesy Fondazione Prada.
ImageNet, Li Fei-Fei, Kai Li, 2009 (detail). Courtesy Fondazione Prada.
Exhibition view of Kate Crawford | Trevor Paglen: “Training Humans”, Osservatorio Fondazione Prada. Photo by Marco Cappelletti. Courtesy Fondazione Prada.