AI-generated ‘poverty porn’ fake images used by aid agencies

AI-generated images of extreme poverty, children and survivors of sexual violence are flooding image websites and are increasingly being used by leading health NGOs, according to global health professionals who have expressed concern about a new era of “poverty porn”.

“Everywhere, people are using them,” said Noah Arnold, who works at Fairpicture, a Switzerland-based organization focused on promoting ethical images in global development. “Some are actively using AI imagery, and others, we know, are at least experimenting.”

“The images replicate the visual grammar of poverty – children with empty plates, cracked floors, stereotypical visuals,” said Arseny Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies global health image production.

Alenichev has collected more than 100 AI-generated images of extreme poverty used by individuals or NGOs in social media campaigns against hunger and sexual violence. The photos he shared with The Guardian show exaggerated and stereotypical scenes: children huddled together in muddy water; an African girl in a wedding dress with a tear staining her cheek. In a comment piece published on Thursday in The Lancet Global Health, he argued that these images amount to “poverty porn 2.0”.

While it’s difficult to measure the prevalence of AI-generated images, Alenichev and others say their use is on the rise, driven by concerns about consent and cost. Arnold said cuts in US funding for NGO budgets have made matters worse.

“It’s quite clear that many organizations are starting to think about synthetic images instead of real photography, because it’s cheap and you don’t need to get consent and everything,” Alenichev said.

AI-generated images of extreme poverty now appear by the dozen on popular stock photo sites, including Adobe Stock and Freepik, in response to queries such as “poverty”. Many bear captions such as “Realistic photo of a child in a refugee camp”; “Asian children swim in a river full of garbage”; and “A white Caucasian volunteer provides medical consultation to young black children in an African village”. Adobe sells licences to the last two images on that list for around £60.

“They are very racist. They should not even allow these things to be published because they resemble the worst stereotypes about Africa, India, you name it,” Alenichev said.

Freepik CEO Joaquin Abella said the responsibility for the use of such extreme images lies with media consumers, not platforms like his. AI stock images are generated by the platform’s global user community, who can receive a licensing fee when Freepik customers choose to purchase their images, he said.

Freepik tried to limit the biases it found in other parts of its photo library, he said, by “introducing diversity” and trying to ensure gender balance in photos of lawyers and executives hosted on the site.

But he said there was only so much his platform could do. “It’s like trying to drain the ocean. We’re making an effort, but in reality, if customers around the world want images a certain way, there’s absolutely nothing anyone can do.”

Screenshot showing AI-generated “poverty” images on a stock photo website. Such images have raised concerns about bias and stereotyping. Illustration: Freepik

Leading charities have previously used AI-generated images in their global health communications. In 2023, the Dutch arm of the UK charity Plan International released a video campaign against child marriage containing AI-generated images of a girl with a black eye, an older man, and a pregnant teenager.

Last year, the United Nations published a video on YouTube with an AI-generated “re-enactment” of sexual violence in conflict, which included AI-generated testimony from a Burundian woman describing being raped by three men and left for dead in 1993 during the country’s civil war. The video was removed after The Guardian contacted the United Nations for comment.

A spokesperson for UN peacekeepers said: “The video in question, which was produced more than a year ago using a rapidly developing tool, has been removed because we believe it shows inappropriate use of artificial intelligence, may pose risks in terms of information integrity, and mixes real footage and artificially generated near-real content.”

“The United Nations remains steadfast in its commitment to supporting victims of conflict-related sexual violence, including through innovation and creative advocacy.”

Arnold said the increased use of these AI images comes after years of debate in the sector about ethical imagery and the dignified telling of stories about poverty and violence. “It’s supposed to be easier to take off-the-shelf AI images that come without consent, because they’re not real people.”

NGO communications advisor Kate Cardol said she was frightened by the images, and recalled previous discussions about the use of “poverty porn” in the sector.

“It saddens me that the struggle for a more ethical representation of people experiencing poverty now extends to images that are not even real,” she said.

Generative AI tools have long been shown to replicate – and sometimes exaggerate – broader societal biases. The spread of biased images in global health communications could make the problem worse, Alenichev said, because the images could leak onto the wider internet and be used to train the next generation of AI models, a process that appears to amplify bias.

A spokesperson for Plan International said the NGO had, as of this year, “adopted guidelines advising against the use of artificial intelligence to depict individual children”, and said the 2023 campaign used AI-generated images to protect “the privacy and dignity of real girls”.

Adobe declined to comment.
