Human Rights · 5 min read

AI Abuse in Schools

AI abuse in schools is now a safeguarding issue. Pupil photographs can still celebrate achievement, confidence and school life, but the risk has changed. Images shared on websites, newsletters and social media can now be copied, altered, sexualised or used in blackmail attempts. The answer is not to erase children from public life. The answer is better consent, safer image use, regular review and stronger safeguarding judgment.

Written by Aneeta Prem · Published on May 9, 2026

AI Abuse in Schools: Why Pupil Photos Need a New Safeguarding Standard

AI abuse in schools is not a reason to remove every image

Schools, charities and community organisations use real photographs for good reasons. Images can show children learning, young people leading, volunteers helping, patients being supported and communities coming together. They can build trust, show impact and give dignity to people whose stories deserve to be seen.

That must not be dismissed.

The issue is not whether images should ever be used. The issue is whether organisations now use them carefully enough for the AI age.

The UK Safer Internet Centre has briefed schools on image safety after concerns about AI blackmailers using pupil images from websites and social media. Its guidance focuses on managing children’s image security, not banning all photographs. It asks schools to think more carefully about how images are shared, protected and reviewed.

That is the right balance. Visibility matters. Safeguarding matters too.

Why pupil photos now carry a different risk

For years, schools have used photographs to celebrate achievement. A sports day picture. A class project. A student award. A school trip. Most of that use has been positive and well intentioned.

However, good intent is no longer enough.

AI tools have changed what can happen to an image after it leaves the organisation’s control. A clear photograph can be copied, manipulated and misused. A child’s name, uniform or school identity can add further risk. A public image can become material for humiliation, coercion or extortion.

That does not mean every image is unsafe. It means the risk assessment has changed.

The old question was:

Do we have permission to use this photograph?

The new question must be:

Is this image necessary, proportionate, consented to, reviewed and safe enough to publish in the current digital environment?

That is a higher standard, but it is not an impossible one.

Responsible image use is still possible

Responsible image use should not disappear from schools, charities or public-interest work. Children and young people should not become invisible because criminals abuse technology. Survivors, patients, volunteers and campaigners should not be erased from public life because risk exists.

The task is to reduce avoidable risk.

That means choosing images with care. It means avoiding unnecessary full names. It means checking whether a child’s school, location, routine or vulnerability is too visible. It means reviewing old galleries and removing images that no longer need to remain online. It also means giving parents, carers, children and adults a clear route to ask for an image to be removed.

Government guidance on taking and using photos and videos in schools recognises that schools may use photos and videos for many purposes, including educational activities, marketing, newsletters, social media, trips, performances and classroom displays. The guidance focuses on lawful basis, restrictions, responsible use, storage and transparency. It does not say schools must never use images.

That distinction is vital.

Consent forms need to catch up

Many organisations rely on image consent forms. Those forms still matter. However, some were written before AI misuse became a mainstream safeguarding concern.

A parent may have agreed to a child’s photograph appearing in a newsletter. A student may have agreed to an image being used to celebrate an award. A volunteer may have agreed to appear in a campaign photo. That does not mean they understood how easily an image could later be copied, scraped, altered or misused.

Consent should be clear, current and meaningful.

The Information Commissioner’s Office says data protection law is likely to apply when a school takes photos or videos for official use, such as promotional material, and pupils or parents must be told how the images will be used.

In practical terms, organisations should ask:

Does the consent wording explain where images may appear?

Does it explain social media use?

Does it say how long images may remain online?

Can consent be withdrawn?

Does the person understand whether their name will appear?

Does the organisation have a process for removing images?

Does the policy now mention AI manipulation, image misuse or sextortion?

These questions are not bureaucratic. They are part of modern safeguarding.

This matters to Freedom Charity’s education work

At Freedom Charity, we have always believed that education can protect children before harm becomes irreversible. Freedom’s safeguarding work includes school resources, professional awareness, the books But It’s Not Fair and Cut Flowers, and PSHE-accredited forced marriage and FGM lesson plans. Those resources exist because children need information, visibility and safe routes to help.

That is why this issue must be handled carefully. We should not respond to AI abuse by making children invisible. Silence has never protected children from abuse. However, public-facing images must now sit within a stronger safeguarding framework.

Schools and charities can still use images. They should do so with consent, dignity and review.

This is not just an online safety issue

The phrase “online safety” can sound too soft. AI abuse in schools can involve child protection, image-based abuse, sexual coercion, extortion, reputational harm and violence against women and girls.

The Internet Watch Foundation’s 2025 Annual Data and Insights Report recorded 451,210 reports assessed and 311,610 reports confirmed to contain or lead to child sexual abuse material. Its report also highlights the growing challenge of AI-generated child sexual abuse imagery.

A fake image can cause real harm. A manipulated image can create fear, shame and isolation. A child may worry that adults will blame them. A young person may be threatened with exposure. A family may not know where to turn.

The first response must be protection, not judgement.

Girls face a particular risk

AI abuse can affect boys, men, staff, volunteers and public figures. However, sexualised image abuse has a strong gendered dimension.

The Internet Watch Foundation has reported that girls remain disproportionately represented in child sexual abuse imagery, including AI-generated imagery.

This matters because shame is often used as a weapon against girls and women. A fake image can still be used to threaten someone’s education, reputation, family relationships or personal safety. The image may be artificial. The coercion is not.

Professionals must understand that distinction.

Schools and charities need proportionate image policies

A good image policy should not frighten organisations into silence. It should help them make better decisions.

Schools, charities and community organisations should review:

public website images

social media images

old event galleries

newsletters and campaign materials

fundraising stories

student or volunteer photographs

images of children in uniform

images linked to names, locations or routines

images of vulnerable people or sensitive services

photography supplier arrangements

withdrawal of consent processes

incident response plans

This is not about stripping websites of life. It is about using real people’s images with care, dignity and purpose.

What safer image use looks like

Safer image use does not have to be complicated.

Organisations can use group images where individuals are less identifiable. They can take photographs from behind, from a distance or from an angle that still shows activity without exposing a child’s face. They can avoid full names. They can use first names only where appropriate. They can blur faces where needed. They can remove old images after a set period. They can avoid linking images to sensitive personal stories unless the person has given informed and specific consent.

They can also choose strong non-identifying images: hands working, classrooms from behind, campaign materials, books, displays, feet on a sports field, artwork, event banners, backs of heads, symbolic objects or carefully cropped scenes.

The point is not to hide people. The point is to avoid unnecessary exposure.

The law is moving, but practice must move too

The UK Government has recognised that AI can be used to facilitate child sexual abuse, violence against women and girls, fraud and other crimes. Government material on policing also refers to offenders using AI-generated content, including deepfakes, to facilitate offences.

Section 138 of the Data (Use and Access) Act 2025 came into force on 6 February 2026. It created offences relating to creating, or requesting the creation of, a purported intimate image of an adult without consent or reasonable belief in consent.

For children, sexual images and pseudo-images raise serious child protection and criminal law issues. The Government has also set out proposals through the Crime and Policing Bill and related material to address AI models optimised to create child sexual abuse material.

However, law alone will not protect children. Organisations need practical policies, trained staff and clear escalation routes.

What organisations should do now

Every school, charity and community organisation should carry out an image audit.

That does not mean deleting everything overnight. It means checking what is online, why it is there and whether it still needs to be public.

The first priority should be images that are:

old

unnecessary

clearly identifiable

linked to full names

linked to uniforms or locations

linked to vulnerable children or adults

connected with sensitive services

used without clear records of consent

Organisations should then update image policies, consent wording and staff guidance. They should also make sure someone has responsibility for removing images quickly if concerns arise.

A good question for every image is:

Does this image still serve a clear purpose, and can we use it in a safer way?

What parents, carers and adults can ask

Parents and carers do not need to become technology experts. They can ask simple questions.

Where will this image appear?

Will my child’s name be used?

Will the image be shared on social media?

How long will it stay online?

Can I withdraw consent?

What happens if the image is copied or misused?

Does the organisation review old photographs?

Does the policy cover AI manipulation?

Adults who appear in charity, health or campaign images should be able to ask similar questions. Consent should never be treated as a one-off signature that lasts forever regardless of context.

Do not blame the person in the image

If an image is misused, the person in the image is not responsible.

Not the child.

Not the parent.

Not the school that acted in good faith.

Not the charity that used a photograph responsibly.

Responsibility sits with the perpetrator and with any system that enables abuse to spread without effective prevention, reporting and removal.

That point matters because shame helps abusers. Clear safeguarding language helps victims.

The new safeguarding test

Before publishing an image, every organisation should ask:

Would we still use this photograph if we understood how easily it could be copied, altered, sexualised or used to threaten the person in it?

If the answer raises concern, the image should be reviewed before publication. The organisation should consider whether the person needs to be identifiable, whether a name is necessary, whether the image should be time-limited and whether a safer version would achieve the same purpose.

This is not about fear. It is about judgement.

Children, students, patients, survivors, volunteers and campaigners should not be made invisible. Their stories matter. Their dignity matters. Their safety matters too.

AI abuse in schools has changed the safeguarding landscape. The response should be calm, practical and urgent: not a ban on images, but a higher standard for using them.

FAQs

What is AI abuse in schools?

AI abuse in schools includes the use of artificial intelligence tools to alter, sexualise, impersonate or misuse images, videos or voices connected to pupils, staff or the school community.

Does this mean schools should remove every pupil photo?

No. Schools should take a proportionate approach. They should review images, avoid unnecessary identification, update consent processes and remove images that create avoidable risk.

Can charities still use images of children, volunteers or people they support?

Yes, where image use is necessary, consented to, dignified and reviewed. Charities should avoid unnecessary names, sensitive contexts and images that expose vulnerable people to avoidable risk.

Why is consent not enough on its own?

Consent matters, but it must be informed and current. People should know where images may appear, how long they may remain online, whether names will be used and how they can ask for removal.

Is a fake sexual image still harmful?

Yes. A fake image can still cause fear, shame, bullying, coercion and reputational harm. The image may be artificial, but the impact can be real.

What should parents ask schools?

Parents should ask where images are published, whether names are used, how long images remain online, whether consent can be withdrawn and what the school will do if an image is misused.

Author box

Aneeta Prem MBE is a UK author, human rights campaigner and safeguarding expert. She founded Freedom Charity and has worked for many years on forced marriage, dishonour abuse, FGM, coercion, child protection and the rights of women and girls. Her safeguarding books, But It’s Not Fair and Cut Flowers, are used to help young people and professionals recognise abuse and seek help safely.
