Real Photos, Real Trust: What AI Imagery Means for Your Charity Website

New research from the University of East Anglia has found that AI-generated imagery is being widely used in charity campaigns, often reinforcing harmful stereotypes rather than challenging them. When audiences spot AI imagery, the conversation shifts away from your cause and towards the technology. Studies also show that knowing an image is AI-generated reduces empathy and donation intent. The takeaway for your website: invest in authentic photography or quality stock imagery, tell stories of agency rather than suffering, and if you do use AI, explain why. Your supporters are savvier than you think, and trust is everything.


Your website imagery says more about your organisation than you might think. It tells potential supporters who you are, what you value, and whether they can trust you. And right now, there’s a growing problem in the charity sector that’s putting that trust at risk.

New research from the University of East Anglia (UEA), published in their Artificial Authenticity report, has found that AI-generated imagery is being widely used across the sector to depict poverty, vulnerability, and humanitarian need. And rather than challenging the tired, harmful visual tropes that charities have been criticised for over decades, it’s reinforcing them.

If you run a charity, nonprofit, or mission-driven organisation, this has real implications for your website. Your site is often the first place a potential supporter encounters your work, and the imagery you use there shapes whether they stick around, trust you, and take action.

What the research found

The UEA team analysed 171 AI-generated images used by 17 voluntary organisations, from large international bodies like Amnesty International, the World Health Organization, and WWF, to smaller grassroots charities. Poverty was by far the most common theme, appearing in 51 of those images. Environmental imagery and human rights visuals were also common, but poverty dominated.

Crucially, these weren’t obviously stylised or illustrative images. Nearly 70% were photorealistic, designed to look like real documentary photography. The report found that AI was being used to recreate “harmful tropes associated with poverty, race and vulnerability,” rather than challenge them.

This follows a Guardian investigation in October 2025, which revealed that aid agencies and charities were increasingly turning to AI-generated images of extreme poverty, children, and survivors of sexual violence for their campaign materials.

It’s a pattern, and it’s growing.

Why this is a trust problem for the whole sector

Here’s the thing that should concern every charity leader reading this: the damage doesn’t stay contained to the organisations using these images. It spreads across the whole sector.

The UEA report warned that AI images of poverty could lead donors to give money based on fabricated evidence they believe to be real. If a supporter lands on your website, sees a photorealistic image of a child in crisis, feels moved to donate, and later discovers the image was entirely machine-generated, that’s not just a PR problem for one charity. It chips away at public confidence in all of you.

As the report stated: “The ethical implications extend beyond individual organisational practice to the sector’s collective credibility.”

A separate report from filmmaking agency The Saltways echoed this, warning that “one charity’s misuse becomes the sector’s problem, as media coverage rarely distinguishes between individual organisations.”

Co-author David Girling from UEA put it well: “Charities exist because people care about other people. The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk.”

Your supporters have chosen to trust you with their money and their belief in your mission. That trust is hard-won, and once it’s gone, it’s incredibly difficult to rebuild.

It can actually hurt your fundraising

This isn’t just an ethical concern. There’s growing evidence that AI imagery can directly harm fundraising performance.

The UEA report cites research showing that when donors are aware an image is AI-generated, it reduces their empathic response, their sense of guilt, and their perceived sadness, all of which lead to lower donation intentions. That effect persists even when charities are transparent about their AI use and explain their ethical reasons for doing so.

Here’s the kicker: in the same study, participants couldn’t actually tell the difference between AI photos and real ones just by looking at them. The images themselves weren’t the problem. The disclosure was. Once people knew the image was synthetic, the emotional connection dropped.

So even if the AI images on your website look flawless, the moment someone finds out they’re not real, you’ve lost something you can’t easily get back. That’s a real problem for your donation pages, your impact stories, and anywhere else on your site where you’re asking people to feel something and act on it.

Your audience notices (and the conversation shifts)

One of the most striking findings from the UEA research was about what actually happens when people encounter AI-generated charity imagery online. Spoiler: they don’t engage with your cause. They engage with the technology.

The researchers analysed over 470 public social media comments across six charity campaigns. Of those comments, 141 focused on AI ethics and authenticity concerns, 122 discussed the technical quality of the images, and only 80 (fewer than 20%) actually engaged with the humanitarian issue the charity was trying to highlight.

Think about that for a moment. If the imagery on your website or in your campaigns is generating more debate about whether the picture is real than about the people you’re trying to help, the tool is actively working against your mission.

The report also found that environmental organisations faced a particular kind of backlash. WWF Denmark was criticised for using energy-intensive AI tools to promote sustainability. Audiences called it out as hypocrisy, with some labelling the approach “ecocidal.” If your organisation’s values and your choice of tools don’t align, people will notice.

You’ve invested time, energy, and budget into building a website that communicates your mission clearly. The last thing you want is for that work to be undermined by questions about whether your images are real.

Labelling helps, but it’s not enough on its own

The UEA study found that more than one in ten AI-generated images in charity communications had no acknowledgement that they were synthetic. The report described this as a direct breach of transparency.

But here’s the nuance: even when images were clearly labelled as AI-generated (and 85% of them were), organisations still faced significant backlash. Disclosure on its own didn’t protect them. As the report put it, transparency is a necessary ethical baseline, but it’s not a resolution.

There is one thing that did seem to help, though. The research found that when charities explained why they were using AI, things landed differently. If an organisation could articulate a clear ethical reason, such as protecting vulnerable people from being photographed or filmed, audiences were more receptive. It’s not just about saying “this image is AI-generated.” It’s about saying “here’s why we made that choice.”

The safeguarding tension

It’s worth acknowledging that the picture isn’t entirely black and white. One of the more thought-provoking findings from the research is around safeguarding.

Some organisations use AI-generated imagery specifically to protect vulnerable people from being re-traumatised by the process of being photographed or filmed for campaign purposes. That’s a legitimate and compassionate reason to explore alternatives to traditional photography.

But the research found that donors often reject these synthetic images, prioritising their own need for an “authentic witness” over the beneficiary’s right to privacy. That’s a difficult tension, and it doesn’t have a neat answer.

What it does tell us is that if you’re using AI imagery for safeguarding reasons, communicating that context clearly on your website matters enormously. Don’t just label the image. Explain the thinking behind it. Your audience is more likely to understand and support a decision when they can see the care that went into making it.

What this means for your website

As someone who designs websites for charities and purpose-led organisations, I think about imagery constantly. The visuals on your site aren’t decoration. They’re doing heavy lifting: building trust, communicating your values, and guiding visitors towards taking action. If those images feel inauthentic, that whole journey falls apart.

Here are a few things worth considering for your website in light of this research:

Audit your website imagery. Start with your homepage, your about page, and any key landing pages. Are the images authentic? Do they represent the people you work with honestly and with dignity? If you’re relying on AI-generated visuals to fill gaps, it’s worth asking whether those images are helping or hindering the experience for your visitors.

Invest in real photography where you can. I know budgets are tight. But even a small set of authentic, well-shot photos of your team, your work, and the communities you serve will transform your website. They give visitors something real to connect with, and that connection is what turns a casual browser into a genuine supporter. It’s also worth remembering that when charities switch to AI, professional photographers and filmmakers, particularly those in the Global South, lose work. Choosing to commission real photography isn’t just better for your website. It supports the livelihoods of the people telling these stories.

Use quality stock imagery if bespoke photography isn’t in the budget. There are some great stock libraries out there with diverse, respectful, and realistic imagery. It takes a bit more time to find the right photos, but a thoughtfully curated set of stock images on your website is a far better option than generating fake scenes with AI. The key is to look for images that feel natural and human, and that sit comfortably alongside the rest of your site’s visual identity.

Tell stories of agency, not just suffering. The UEA research suggested that the dominance of poverty imagery in AI-generated content points to a persistent belief that showing suffering is essential for engagement. But decades of sector experience tell us otherwise. Supporters respond to hope, progress, and impact. Show people what change looks like on your website, not just what the problem looks like.

If you use AI, don’t just label it. Explain it. As this research shows, a simple “AI-generated image” caption isn’t enough on its own. If you’ve made a deliberate, ethical decision to use AI imagery, whether for safeguarding, accessibility, or another good reason, say so. A brief note explaining your reasoning will go much further than a label alone.

If you do use AI, involve communities in the process. One of the UEA report’s key recommendations is that if charities choose to use AI-generated imagery, they should co-create it with local communities: involving them in generating prompts, reviewing outputs, and approving final images. That way, the people being represented have a say in how they’re portrayed. It’s a much more respectful approach, and it shows in the final result.

Think about your website’s visual identity as a whole. Your homepage, your impact pages, your donation journey, your team page: they all need to tell a consistent, authentic story. Visitors notice when imagery feels disjointed or generic. Make sure the visual thread running through your site is something you’d be proud to stand behind if someone asked about it.

The bottom line

AI is a powerful tool, and it’s not going anywhere. As report co-author Deborah Adesina noted, “The future of charity storytelling will not hinge on technological capability alone. It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.”

The organisations that come out of this well will be the ones whose websites reflect who they really are: real photography, honest storytelling, and a visual identity that treats supporters as thoughtful people who can tell the difference. That’s always been good web design practice. This research just makes it more urgent.


If you want to bounce some ideas around for finding images for your website, or need help sourcing authentic stock imagery, drop me a message. I’d love to help you get it right.
